WO2023000787A1 - Video processing method, apparatus, electronic device and computer-readable storage medium - Google Patents

Video processing method, apparatus, electronic device and computer-readable storage medium

Info

Publication number
WO2023000787A1
WO2023000787A1, PCT/CN2022/092815, CN2022092815W
Authority
WO
WIPO (PCT)
Prior art keywords
data
target
preset
real
pushed
Prior art date
Application number
PCT/CN2022/092815
Other languages
English (en)
French (fr)
Inventor
唐建东
刘鑫蕊
Original Assignee
苏州景昱医疗器械有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州景昱医疗器械有限公司
Publication of WO2023000787A1 publication Critical patent/WO2023000787A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70 ICT specially adapted for medical diagnosis, medical simulation or medical data mining for mining of medical data, e.g. analysing previous cases of other patients
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H80/00 ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/61 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/75 Media network packet handling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343 Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present application relates to the technical field of image processing, and in particular to a video processing method, device, electronic equipment, and computer-readable storage medium.
  • the purpose of the present application is to provide a video processing method, device, electronic device and computer-readable storage medium, so as to solve the prior-art problem that, during doctor-patient communication, video data cannot be streamed at a definition differentiated by disease type.
  • the present application provides a video processing method, the method comprising: acquiring real-time video data obtained by shooting a target object with a camera; acquiring the disease type of the target object; acquiring, based on the disease type of the target object, a streaming strategy for the real-time video data, where the streaming strategy is used to indicate a first preset definition; determining, based on the streaming strategy, the data to be pushed corresponding to the real-time video data; and pushing the data to be pushed to the doctor's device.
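The five steps just described can be sketched end to end as follows; the helper names, the disease-type table and the concrete definitions are illustrative assumptions, not part of the application:

```python
def get_streaming_strategy(disease_type):
    """Map a disease type to a streaming strategy (step S103).

    The table is hypothetical; the application leaves concrete disease
    types and first preset definitions open.
    """
    table = {"parkinsons": (1920, 1080), "depression": (960, 540)}
    return {"first_preset_definition": table.get(disease_type, (960, 540))}

def determine_push_data(video_frames, strategy):
    """Tag each frame with the target definition (step S104).

    A real implementation would re-encode pixels; frames are opaque here.
    """
    definition = strategy["first_preset_definition"]
    return [{"frame": f, "definition": definition} for f in video_frames]

# S101/S102 inputs; the S105 network push is omitted as device-dependent.
frames = ["frame-0", "frame-1"]          # stand-in for camera output
strategy = get_streaming_strategy("parkinsons")
push_data = determine_push_data(frames, strategy)
print(push_data[0]["definition"])  # (1920, 1080)
```

The point of the sketch is only the data flow: disease type selects the strategy, and the strategy determines how the real-time video data becomes the data to be pushed.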
  • the method further includes: determining the target site of the target object based on the disease type of the target object; determining the data to be pushed then includes: adjusting the definition of the first part of the real-time video data corresponding to the target site based on the first preset definition, so as to obtain the data to be pushed.
  • the method further includes: acquiring a second preset definition, where the second preset definition is lower than the first preset definition; adjusting the definition of the first part of the real-time video data corresponding to the target site to obtain the data to be pushed then includes: adjusting the definition of the first part of the real-time video data corresponding to the target site based on the first preset definition, to obtain the first part of the data to be pushed; and adjusting the definition of the second part of the real-time video data based on the second preset definition, to obtain the second part of the data to be pushed, where the second part of the real-time video data is a part or all of the real-time video data other than the first part.
  • pushing the data to be pushed to the doctor's device includes: synthesizing the data to be pushed from its first part and second part and pushing it to the doctor's device; or pushing the first part and the second part of the data to be pushed to the doctor's device respectively.
  • determining the target site of the target object based on the disease type of the target object includes: acquiring training data of a plurality of sample objects, where the training data of each sample object includes the disease type and target site of that sample object; training a deep learning model with the training data of the plurality of sample objects to obtain a target site classification model; and inputting the disease type of the target object into the target site classification model to obtain the target site of the target object.
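The train/predict interface of the target site classification model can be illustrated as follows; the application trains a deep learning model, for which a simple majority-vote lookup stands in here, and the sample data are invented:

```python
from collections import Counter, defaultdict

def train_site_classifier(training_data):
    """Fit a trivial classifier mapping disease type -> most common target site.

    training_data: iterable of (disease_type, target_site) pairs, one per
    sample object. A deep learning model would be trained on the same pairs;
    the majority-vote lookup only illustrates the train/predict contract.
    """
    buckets = defaultdict(Counter)
    for disease_type, target_site in training_data:
        buckets[disease_type][target_site] += 1
    return {d: sites.most_common(1)[0][0] for d, sites in buckets.items()}

samples = [
    ("parkinsons", "fingers"),
    ("parkinsons", "fingers"),
    ("parkinsons", "arms"),
    ("depression", "face"),
]
model = train_site_classifier(samples)
print(model["parkinsons"])  # fingers
print(model["depression"])  # face
```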
  • the method further includes: acquiring the network bandwidth data corresponding to the target object; acquiring the streaming strategy of the real-time video data based on the disease type of the target object then includes: acquiring the streaming strategy of the real-time video data based on both the disease type of the target object and the network bandwidth data corresponding to the target object.
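Combining the disease type with the measured network bandwidth might look like the following sketch; the resolution table and the 4000 kbps threshold are assumptions for illustration only:

```python
# Hypothetical first-preset-definition table per disease type.
RESOLUTION_BY_DISEASE = {
    "parkinsons": (1920, 1080),  # motor symptoms benefit from detail
    "depression": (960, 540),    # facial expression needs less detail
}

def streaming_strategy(disease_type, bandwidth_kbps):
    """Pick the first preset definition from disease type, capped by bandwidth."""
    width, height = RESOLUTION_BY_DISEASE.get(disease_type, (960, 540))
    # Cap high definitions on slow links (threshold assumed, not from the patent).
    if bandwidth_kbps < 4000 and height > 540:
        width, height = 960, 540
    return {"first_preset_definition": (width, height)}

print(streaming_strategy("parkinsons", 8000))  # full 1080p
print(streaming_strategy("parkinsons", 2000))  # capped to 540p
```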
  • the method further includes: acquiring, based on the disease type of the target object, a display strategy for the data to be pushed, where the display strategy is used to indicate one or more of a preset size, preset position, preset brightness, preset contrast and preset saturation; determining, based on the display strategy, the data to be displayed corresponding to the data to be pushed; and displaying the data to be displayed using a display device.
  • the display strategy is used to indicate the preset size; the method further includes: determining the target site of the target object based on the disease type of the target object; determining, based on the display strategy, the data to be displayed corresponding to the data to be pushed then includes: scaling the first part of the data to be pushed corresponding to the target site based on the preset size, to obtain the data to be displayed, so that the size of the target site displayed on the display device is not smaller than the preset size and the first part of the data to be pushed corresponding to the target site is completely displayed on the display device.
  • the display strategy is also used to indicate the preset position; scaling the first part of the data to be pushed corresponding to the target site based on the preset size to obtain the data to be displayed then includes: scaling the first part of the data to be pushed corresponding to the target site based on the preset size, to obtain the data to be translated; and translating the data to be translated based on the preset position, to obtain the data to be displayed, so that the target site is displayed at the preset position on the display device.
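The scale-then-translate computation can be sketched numerically; the coordinate convention, function name and values below are assumptions, since the application fixes neither:

```python
def fit_region(region, min_size, anchor):
    """Compute a zoom factor and translation so a region of interest is shown
    at no less than min_size pixels and centred on the anchor point.

    region:   (x, y, w, h) bounding box of the target site in the frame.
    min_size: (min_w, min_h) preset size from the display strategy.
    anchor:   (ax, ay) preset position in display coordinates.
    """
    x, y, w, h = region
    scale = max(min_size[0] / w, min_size[1] / h, 1.0)  # never shrink below 1x
    cx, cy = (x + w / 2) * scale, (y + h / 2) * scale   # scaled region centre
    dx, dy = anchor[0] - cx, anchor[1] - cy             # translation to anchor
    return scale, (dx, dy)

scale, (dx, dy) = fit_region((800, 400, 100, 100), (300, 300), (960, 540))
print(scale)     # 3.0
print((dx, dy))  # (-1590.0, -810.0)
```

The zoom step satisfies the "not smaller than the preset size" condition, and the translation step places the region at the preset position.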
  • the present application provides a video processing device, the device comprising: a video acquisition module, configured to acquire real-time video data obtained by shooting a target object with a camera; a disease type module, configured to acquire the disease type of the target object; a streaming strategy module, configured to acquire, based on the disease type of the target object, a streaming strategy for the real-time video data, where the streaming strategy is used to indicate a first preset definition; a data-to-be-pushed module, configured to determine, based on the streaming strategy, the data to be pushed corresponding to the real-time video data; and a data pushing module, configured to push the data to be pushed to the doctor's device.
  • the device further includes: a target site module, configured to determine the target site of the target object based on the disease type of the target object; the data-to-be-pushed module is configured to adjust the definition of the first part of the real-time video data corresponding to the target site based on the first preset definition, to obtain the data to be pushed.
  • the device further includes: a definition acquisition module, configured to acquire a second preset definition, where the second preset definition is lower than the first preset definition;
  • the data-to-be-pushed module includes: a first adjustment unit, configured to adjust the definition of the first part of the real-time video data corresponding to the target site based on the first preset definition, to obtain the first part of the data to be pushed; and a second adjustment unit, configured to adjust the definition of the second part of the real-time video data based on the second preset definition, to obtain the second part of the data to be pushed, where the second part of the real-time video data is a part or all of the real-time video data other than the first part.
  • the data pushing module includes: a synthesis pushing unit, configured to synthesize the data to be pushed from the first part and the second part of the data to be pushed and push it to the doctor's device; or a streaming unit, configured to push the first part and the second part of the data to be pushed to the doctor's device respectively.
  • the target site module includes: a training data unit, configured to acquire training data of a plurality of sample objects, where the training data of each sample object includes the disease type and target site of that sample object; a model training unit, configured to train the deep learning model with the training data of the plurality of sample objects to obtain the target site classification model; and a type input unit, configured to input the disease type of the target object into the target site classification model to obtain the target site of the target object.
  • the device further includes: a network bandwidth module, configured to acquire network bandwidth data corresponding to the target object; the streaming strategy module is configured to acquire the streaming strategy of the real-time video data based on the disease type of the target object and the network bandwidth data corresponding to the target object.
  • the device further includes: a display strategy module, configured to acquire a display strategy for the data to be pushed based on the disease type of the target object, where the display strategy is used to indicate one or more of a preset size, preset position, preset brightness, preset contrast and preset saturation; a data-to-be-displayed module, configured to determine, based on the display strategy, the data to be displayed corresponding to the data to be pushed; and a data display module, configured to display the data to be displayed using a display device.
  • the display strategy is used to indicate the preset size; the device further includes: a target site module, configured to determine the target site of the target object based on the disease type of the target object; the data-to-be-displayed module is configured to scale the first part of the data to be pushed corresponding to the target site based on the preset size, to obtain the data to be displayed, so that the size of the target site displayed on the display device is not smaller than the preset size and the first part of the data to be pushed corresponding to the target site is completely displayed on the display device.
  • the display strategy is also used to indicate the preset position;
  • the data-to-be-displayed module includes: a data scaling unit, configured to scale the first part of the data to be pushed corresponding to the target site based on the preset size, to obtain the data to be translated; and a data translation unit, configured to translate the data to be translated based on the preset position, to obtain the data to be displayed, so that the target site is displayed at the preset position on the display device.
  • the present application provides an electronic device, the electronic device includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of any one of the above methods when executing the computer program.
  • the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps of any one of the above-mentioned methods are implemented.
  • the streaming strategy corresponds to the disease type, and a different first preset definition can be set for each disease type, so that real-time video data is pushed in a differentiated manner based on the disease type of the target object. When the disease type corresponds to a lower first preset definition, data is pushed at a lower definition, occupying as little bandwidth as possible; when the disease type corresponds to a higher first preset definition, data is pushed at a higher definition.
  • this meets doctors' needs for differentiated observation of patients with different disease types, and has a high level of intelligence. For example, a higher first preset definition can be set for Parkinson's disease and a lower first preset definition for depression, thereby satisfying the doctor's requirement of clearly observing the patient's condition while occupying as little bandwidth as possible.
  • FIG. 1 is a schematic flow chart of a video processing method provided by an embodiment of the present application.
  • FIG. 2 is a schematic flow chart of a patient-end video processing method provided by an embodiment of the present application.
  • FIG. 3 is a schematic flow chart of a doctor-end video processing method provided by an embodiment of the present application.
  • FIG. 4 is a schematic flow chart of another video processing method provided by an embodiment of the present application.
  • FIG. 5 is a schematic flow chart of another video processing method provided by an embodiment of the present application.
  • FIG. 6 is a schematic flow chart of obtaining the data to be pushed provided by an embodiment of the present application.
  • FIG. 7 is a schematic flow chart of pushing the data to be pushed provided by an embodiment of the present application.
  • FIG. 8 is a schematic flow chart of acquiring the target site of a target object provided by an embodiment of the present application.
  • FIG. 9 is a schematic flow chart of another video processing method provided by an embodiment of the present application.
  • FIG. 10 is a partial flow chart of another video processing method provided by an embodiment of the present application.
  • FIG. 11 is a partial flow chart of another video processing method provided by an embodiment of the present application.
  • FIG. 12 is a partial flow chart of another video processing method provided by an embodiment of the present application.
  • FIG. 13 is a schematic flow chart of acquiring the data to be displayed provided by an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of a video processing device provided by an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of another video processing device provided by an embodiment of the present application.
  • FIG. 16 is a schematic structural diagram of another video processing device provided by an embodiment of the present application.
  • FIG. 17 is a schematic structural diagram of a data-to-be-pushed module provided by an embodiment of the present application.
  • FIG. 18 is a schematic structural diagram of a data pushing module provided by an embodiment of the present application.
  • FIG. 19 is a schematic structural diagram of a target site module provided by an embodiment of the present application.
  • FIG. 20 is a schematic structural diagram of another video processing device provided by an embodiment of the present application.
  • FIG. 21 is a partial structural schematic diagram of another video processing device provided by an embodiment of the present application.
  • FIG. 22 is a schematic structural diagram of a data-to-be-displayed module provided by an embodiment of the present application.
  • FIG. 23 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 24 is a schematic structural diagram of a program product for implementing a video processing method provided by an embodiment of the present application.
  • “At least one” means one or more, and “multiple” means two or more.
  • “And/or” describes the association relationship of associated objects, indicating that there may be three types of relationships, for example, A and/or B, which can mean: A exists alone, A and B exist simultaneously, and B exists alone, where A, B can be singular or plural.
  • the character “/” generally indicates that the contextual objects are an “or” relationship.
  • “At least one of the following” or similar expressions refer to any combination of these items, including any combination of single or plural items.
  • “At least one item of a, b or c” can represent: a; b; c; a and b; a and c; b and c; or a, b and c, where each of a, b and c may be single or multiple. It should be noted that “at least one item” can also be interpreted as “one or more items”.
  • words such as “exemplary” or “for example” are used to mean an example, illustration or illustration. Any embodiment or design described herein as “exemplary” or “for example” is not to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as “exemplary” or “such as” is intended to present related concepts in a concrete manner.
  • An embodiment of the present application provides a video processing method, and the method includes steps S101-S105.
  • Step S101 Acquiring real-time video data obtained by shooting a target object with a camera.
  • the camera may include, for example, an optical camera and/or an infrared camera.
  • the target object is a patient, for example a patient with Parkinson's disease, depression or bipolar disorder, or a patient with another disease. Any disease for which a doctor observes a patient falls within the scope of application of the embodiments of the present application.
  • Step S102 Obtain the disease type of the target object.
  • the disease type of the target object may include at least one of Parkinson's disease, depression, and bipolar disorder, for example.
  • Step S103 Based on the disease type of the target object, acquire a streaming strategy for the real-time video data, where the streaming strategy is used to indicate a first preset definition.
  • the definition involved in the embodiments of this application is an index characterizing how clear video data is; in general, it can be treated as equivalent to resolution.
  • the first preset definition is, for example, 2000 pixels × 3000 pixels, 960 pixels × 540 pixels, or 1920 pixels × 1080 pixels.
  • the second preset definition is, for example, 2000 pixels × 3000 pixels, 960 pixels × 540 pixels, or 1920 pixels × 1080 pixels.
  • Step S104 Based on the streaming strategy, determine the data to be pushed corresponding to the real-time video data.
  • the data to be pushed refers to the video data waiting to be pushed to the doctor's equipment, which may be the real-time video data itself, or the video data obtained after data processing is performed on the real-time video data.
  • Step S105 Push the data to be pushed to the doctor's device.
  • Step S105 may include: pushing the data to be pushed to a server, so that the server pushes the data to be pushed to the doctor's device.
  • the doctor device refers to a terminal device used by a doctor, such as a mobile phone, a tablet computer, a computer, a smart wearable device, and the like.
  • the doctor device is used for displaying streaming data to be pushed.
  • the physician device is also used to remotely program the patient.
  • remote program control refers to the program control in which the doctor and the patient are not in the same space, for example, the doctor is in the hospital and the patient is at home.
  • streaming refers to the process of transmitting the packaged content in the acquisition stage to the server.
  • the function of streaming includes transmitting data to the server, and then to the doctor's device through the server. If the stream is not pushed, the doctor's device will not be able to display the corresponding screen.
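The push/pull relationship between the patient side, the server and the doctor device can be mimicked with an in-memory stand-in; the class, method names and stream address are purely illustrative:

```python
from queue import Queue

class RelayServer:
    """Toy stand-in for the streaming server: the patient side pushes frames
    to a stream address, and the doctor side pulls them from that address."""

    def __init__(self):
        self.streams = {}

    def push(self, address, frame):
        """Patient side: streaming (push) a frame to the server."""
        self.streams.setdefault(address, Queue()).put(frame)

    def pull(self, address):
        """Doctor side: pull the next frame, or None if nothing was pushed."""
        q = self.streams.get(address)
        return q.get() if q is not None and not q.empty() else None

server = RelayServer()
server.push("rtmp://example/patient42", "frame-0")
print(server.pull("rtmp://example/patient42"))  # frame-0
```

If nothing is pushed, `pull` returns None, which corresponds to the statement that without streaming the doctor's device has no picture to display.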
  • the doctor equipment may be provided with one or more display screens.
  • the multiple display screens can be arranged in M rows and N columns and closely fitted to form a flat or nearly flat display area, so that the user sees no gaps between the multiple display screens and views them as if they were a single monolithic display.
  • the plurality of display screens have the same shape and structure, and are arranged in a shape of 4 rows and 6 columns.
  • the plurality of display screens have the same shape and structure, and are arranged in a shape of 3 rows and 3 columns.
  • the video image of the patient can be displayed in real time by pulling the stream.
  • Streaming refers to the process in which the server already has live broadcast content and uses a specified address to pull it.
  • in summary, the camera shoots the target object to obtain real-time video data, the streaming strategy corresponding to the disease type of the target object is acquired, and the data to be pushed corresponding to the real-time video data is determined based on the streaming strategy and pushed to the doctor's device.
  • the streaming strategy corresponds to the disease type, and a different first preset definition can be set for each disease type.
  • real-time video data is pushed in a differentiated manner based on the disease type of the target object: when the disease type corresponds to a lower first preset definition, data is pushed at a lower definition, occupying as little bandwidth as possible.
  • for example, a higher first preset definition can be set for Parkinson's disease, and a lower first preset definition for depression, thereby satisfying the doctor's requirement of clearly observing the patient's condition while occupying as little bandwidth as possible.
  • the embodiment of the present application does not limit the streaming data to be pushed, which may be obtained by adjusting the definition of only a part of the real-time video data, or may be obtained by adjusting the definition of all the real-time video data.
  • the doctor's device can display the patient's video image in real time, and the doctor uses the implanted neurostimulation system to treat the patient.
  • the implantable neurostimulation system mainly includes a stimulator implanted in the body and a program-controlled device outside the body.
  • existing neuromodulation technology mainly implants electrodes at specific structures in the body (i.e., targets) by stereotactic surgery; the stimulator implanted in the patient sends electrical pulses to the targets through the electrodes to regulate the electrical activity and function of the corresponding neural structures and networks, thereby improving symptoms and relieving pain.
  • the stimulator may be any one of an implantable electrical nerve stimulation device, an implantable cardiac electrical stimulation system (also known as a cardiac pacemaker), an implantable drug infusion device (Implantable Drug Delivery System, IDDS) and a lead adapter device.
  • the implantable electrical nerve stimulation device is, for example, a deep brain stimulation system (Deep Brain Stimulation, DBS), an implantable cortical nerve stimulation system (Cortical Nerve Stimulation, CNS), an implantable spinal cord stimulation system (Spinal Cord Stimulation, SCS), an implantable sacral nerve stimulation system (Sacral Nerve Stimulation, SNS), an implantable vagus nerve stimulation system (Vagus Nerve Stimulation, VNS), etc.
  • the stimulator can include IPG, extension wires and electrode wires.
  • IPG: implantable pulse generator.
  • with the aid of the extension lead and electrode lead, one or two channels of controllable, specific electrical stimulation energy are delivered to specific areas of biological tissue.
  • the extension lead is used in conjunction with the IPG as a transmission medium for the electrical stimulation signal, and transmits the electrical stimulation signal generated by the IPG to the electrode lead.
  • the electrode lead releases the electrical stimulation signal generated by IPG to a specific area of the biological tissue through multiple electrode contacts;
  • the implantable medical device has one or more electrode leads on one or both sides; the electrode leads are provided with a plurality of electrode contacts, which can be arranged uniformly or non-uniformly in the circumferential direction of the electrode lead.
  • the electrode contacts are arranged in an array of 4 rows and 3 columns (a total of 12 electrode contacts) in the circumferential direction of the electrode wire.
  • Electrode contacts may include stimulation electrode contacts and/or collection electrode contacts.
  • the electrode contacts can be in the shape of, for example, a sheet, a ring, or a dot.
  • the stimulated biological tissue may be the patient's brain tissue, and the stimulated part may be a specific part of the brain tissue.
  • when the stimulated site differs, the number of stimulation contacts used (single-source or multi-source), the use of one or more channels (single-channel or multi-channel) of specific electrical stimulation signals, and the stimulation parameter data also differ.
  • this application does not limit the applicable disease types, which may be the disease types applicable to deep brain stimulation (DBS), spinal cord stimulation (SCS), pelvic stimulation, gastric stimulation, peripheral nerve stimulation and functional electrical stimulation.
  • disorders that DBS can be used to treat or manage include, but are not limited to: spasticity disorders (e.g., epilepsy), pain, migraine, psychiatric disorders (e.g., major depressive disorder (MDD)), bipolar disorder, anxiety disorders, post-traumatic stress disorder, mild depression, obsessive-compulsive disorder (OCD), conduct disorder, mood disorder, memory disorder, mental status disorder, movement disorder (e.g., essential tremor or Parkinson's disease), Huntington's disease, Alzheimer's disease, drug addiction disorder, autism, or other neurological or psychiatric conditions and impairments.
  • the stimulator in this application is described by taking the deep brain stimulator (DBS) as an example.
  • the program-controlled device can be used to adjust the stimulation parameters of the electrical stimulation signal of the stimulator; alternatively, the stimulator can sense the bioelectrical activity in the deep brain of the patient, and the stimulation parameters of the electrical stimulation signal can be adjusted continuously based on the sensed bioelectrical activity.
  • The stimulation parameters of the electrical stimulation signal may include any one or more of: frequency (the number of electrical stimulation pulse signals per unit time (1 s), in Hz), pulse width (the duration of each pulse, in μs), and amplitude (generally expressed as a voltage, i.e., the intensity of each pulse, in V).
  • Each stimulation parameter of the stimulator can be adjusted in current mode or voltage mode, enabling refined treatment for patients.
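As an illustrative aside (not part of the patent's disclosure), the three stimulation parameters named above can be grouped in a small data structure; the class name, field names and the derived quantities below are assumptions of this sketch:

```python
from dataclasses import dataclass

@dataclass
class StimulationParameters:
    """Hypothetical container for the parameters named in the text."""
    frequency_hz: float    # pulses per unit time (1 s), in Hz
    pulse_width_us: float  # duration of each pulse, in microseconds
    amplitude_v: float     # intensity of each pulse, in volts
    mode: str = "voltage"  # "voltage" or "current" adjustment mode

    def period_us(self) -> float:
        # Time between pulse onsets, derived from the frequency.
        return 1_000_000.0 / self.frequency_hz

    def duty_cycle(self) -> float:
        # Fraction of each period occupied by the pulse.
        return self.pulse_width_us / self.period_us()

# Example values are arbitrary illustrations, not clinical settings.
params = StimulationParameters(frequency_hz=130.0, pulse_width_us=60.0,
                               amplitude_v=2.5)
```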
  • Fig. 4 is a schematic flowchart of another video processing method provided by the embodiment of the present application.
  • The method may further include step S106: determining the target site of the target object based on the disease type of the target object.
  • the target site is, for example, the face, eyes, nose, mouth, ears, fingers, arms, feet, legs, back and the like.
  • the step S104 may include: adjusting the definition of the first part of the real-time video data corresponding to the target part based on the first preset definition, so as to obtain the streaming data to be pushed.
  • The target part is the body part that the doctor is most concerned with and that can reflect the patient's condition.
  • The target part is determined by the disease type of the target object, and the definition of the part of the real-time video data corresponding to the target part is adjusted to obtain the streaming data to be pushed.
  • The definition of the target part in the data to be pushed is thus the first preset definition; an appropriate first preset definition can be set in advance to ensure that the target part the doctor cares about is presented with the required definition.
  • Fig. 5 is a schematic flowchart of another video processing method provided by the embodiment of the present application.
  • Fig. 6 is a schematic flowchart of obtaining streaming data to be pushed provided by the embodiment of the present application.
  • In some possible implementations, the method may further include step S107: acquiring a second preset definition, where the second preset definition is smaller than the first preset definition.
  • The second preset definition may be, for example, 1000 pixels × 2000 pixels, 2000 pixels × 1500 pixels, 1000 pixels × 3000 pixels, and so on.
  • the step S104 may include steps S201-S202.
  • Step S201 Based on the first preset definition, adjust the definition of the first part of the real-time video data corresponding to the target part to obtain the first part of the data to be streamed.
  • Step S202 Based on the second preset definition, adjust the definition of the second part of the real-time video data to obtain the second part of the streaming data to be pushed; the second part of the real-time video data is part or all of the real-time video data other than the first part.
  • The first part of the real-time video data corresponding to the target part is, for example, video data containing a finger (a single finger, multiple fingers, or multiple fingers plus the palm).
  • In that case, the second part, i.e., part or all of the real-time video data other than the first part, is video data that does not contain the fingers (for example, the background after erasing the entire portrait, or the portrait and background after erasing the patient's fingers).
  • Alternatively, the first part of the real-time video data corresponding to the target part is, for example, video data containing the eyes (a single pair of eyes, or the whole face); in that case, the second part, i.e., the rest or all of the real-time video data, is video data that does not contain the eyes (for example, the background after erasing the portrait, or the portrait and background after erasing the patient's eyes).
  • The first part and the second part of the above real-time video data can be extracted from the original real-time video data and have their definitions adjusted separately to obtain the first part and the second part of the data to be pushed, with the definition of the first part of the data to be pushed higher than that of the second part.
  • Doctors generally do not have high requirements for the definition of video data outside the target part, so differentiated definition adjustment is applied to the first part of the real-time video data corresponding to the target part and to the second part, i.e., part or all of the data outside the first part.
  • When the doctor observes the patient remotely, the target part and the other parts can then be presented with different definitions: the target part at a higher definition to ensure the doctor can clearly observe what is needed, and the other parts at a lower definition, reducing the amount of data in the streaming process and further reducing bandwidth occupation.
  • The degree of intelligence is thereby greatly improved.
  • the embodiment of the present application does not limit the way of pushing the stream, and the two parts of data may be combined or pushed separately.
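The two-part definition adjustment of steps S201–S202 can be sketched in miniature, using a nested Python list as a stand-in for a video frame and plain subsampling as a stand-in for re-encoding at the second preset definition; the helper name `split_and_adjust` and its ROI convention are assumptions of this sketch, not the embodiment's actual encoder:

```python
def split_and_adjust(frame, roi, downscale=2):
    """Split a frame into a full-definition first part (the target-part ROI)
    and a reduced-definition second part (the remainder).

    frame: 2D list of pixel values (a toy stand-in for a real video frame).
    roi:   (top, left, bottom, right) box around the target part.
    downscale: keep every `downscale`-th pixel outside the ROI — a crude
               stand-in for re-encoding at the second preset definition.
    """
    top, left, bottom, right = roi
    # First part: the ROI cropped at full definition.
    first = [row[left:right] for row in frame[top:bottom]]
    # Second part: the whole frame subsampled (the ROI pixels here are
    # simply redundant; a real encoder could mask or skip them).
    second = [row[::downscale] for row in frame[::downscale]]
    return first, second

# 8x8 toy frame whose pixel value encodes its position (row*10 + column).
frame = [[r * 10 + c for c in range(8)] for r in range(8)]
first, second = split_and_adjust(frame, roi=(2, 2, 6, 6), downscale=2)
```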
  • FIG. 7 is a schematic flow chart of pushing data to be pushed according to an embodiment of the present application.
  • the step S105 may include step S301 or S302 .
  • Step S301 Based on the first part and the second part of the data to be pushed, synthesize the data to be pushed and push it to the doctor device.
  • Step S302 respectively push the first part and the second part of the data to be pushed to the doctor device.
  • Step S302 may include: pushing the first part and the second part of the streaming data to be pushed to the server respectively, so that the server pushes them respectively to the doctor device, and the doctor device can synthesize a complete video image based on the first part and the second part of the streaming data and display it to the doctor.
  • The advantage of this is that, compared with transmitting the whole data to be pushed at the higher definition, transmitting the second part at a lower definition greatly reduces the total amount of data in the transmission process (first pushed to the server, then pushed by the server to the doctor device).
  • Because fingers, eyes and similar parts usually account for a small proportion of the entire real-time video frame, the second part of the real-time video data is usually much larger than the first part; this improves the efficiency of data pushing, reduces the amount of data downloaded per unit time, and reduces the probability of the doctor device stuttering.
  • The parts of concern are shown in high definition so that the doctor can observe them clearly, while the other parts are displayed in lower definition; this strong contrast lets the doctor focus more on the parts of concern and observe their condition more attentively, effectively improving the treatment effect by technical means.
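The receiving side of the separate-push variant (the doctor device synthesizing a complete picture from the two parts) can be sketched in the same toy representation; the nearest-neighbour upscaling and the `composite` helper are assumptions of the sketch, not the embodiment's actual decoder:

```python
def composite(first, second, roi, downscale=2):
    """Rebuild a full frame from the two pushed parts: nearest-neighbour
    upscale the low-definition second part back to full size, then paste
    the full-definition first part over the target-part ROI."""
    top, left, bottom, right = roi
    # Upscale the subsampled remainder: repeat each pixel and each row.
    full = []
    for row in second:
        expanded = []
        for px in row:
            expanded.extend([px] * downscale)
        for _ in range(downscale):
            full.append(list(expanded))
    # Overwrite the ROI region with the high-definition first part.
    for i, row in enumerate(first):
        full[top + i][left:left + len(row)] = row
    return full

# Toy data: a 4x4 frame (value = row*10 + column) split with roi=(1,1,3,3).
second = [[0, 2], [20, 22]]     # subsampled remainder (downscale=2)
first = [[11, 12], [21, 22]]    # full-definition ROI
full = composite(first, second, roi=(1, 1, 3, 3), downscale=2)
```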
  • the first part of the data to be pushed corresponding to the target site and the second part of the data to be pushed corresponding to parts other than the target site can be synthesized to obtain the data to be pushed and then pushed, or they can be pushed separately.
  • The embodiment of the present application does not limit the way of obtaining the target part of the target object: it can be entered manually by the doctor through the doctor device, imported (or read) from a database through a data interface, or acquired through deep learning techniques.
  • FIG. 8 is a schematic flowchart of acquiring a target part of a target object according to an embodiment of the present application.
  • the step S106 may include steps S401 - S403 .
  • Step S401 Obtain training data of multiple sample objects, the training data of each sample object includes the disease type and target site of the sample object.
  • the training data of the sample objects can be real data collected from real patients, or pseudo-real data generated by artificial intelligence algorithms.
  • Step S402 Using the training data of the plurality of sample objects to train a deep learning model to obtain a target part classification model.
  • Step S403 Input the disease type of the target object into the target part classification model to obtain the target part of the target object.
  • the target part classification model is obtained by training the deep learning model.
  • In this way, the target part of the target object can be obtained in real time, and when the number of sample objects is large enough, the accuracy can be expected to reach a very high level.
  • it has a high level of intelligence, and can avoid human errors, reduce data interaction with medical staff equipment and data storage devices, and avoid patient privacy leakage.
  • Through the learning and tuning of the deep learning model, a mapping from input to output can be established; although the exact functional relationship between input and output cannot be fully recovered, it can approximate the actual correlation as closely as possible.
  • The target part classification model trained in this way can therefore classify target parts automatically, and the classification results have high reliability.
  • the present application may use the above training process to train the target part classification model, and in other implementations, the present application may use a pre-trained target part classification model.
  • the present application does not limit the training process of the target part classification model.
  • the above-mentioned supervised learning training method, semi-supervised learning training method, or unsupervised learning training method may be used.
  • The step S402 may include: training the deep learning model with the training data of the plurality of sample objects until a preset training end condition is met, so as to obtain the target part classification model.
  • The present application does not limit the preset training end condition; for example, it may be that the number of training iterations reaches a preset number (e.g., 1, 3, 10, 100, 1000 or 10000 times), that all training data in the training set have completed one or more passes, or that the total loss value obtained in the current iteration is not greater than a preset loss value.
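To make the train/predict interface of steps S401–S403 concrete, here is a minimal stand-in that memorises the most frequent target site per disease type. It deliberately substitutes a majority-vote lookup for the deep learning model described above, and all names and sample data are hypothetical:

```python
from collections import Counter, defaultdict

class TargetSiteClassifier:
    """Toy stand-in for the target-site classification model of S401-S403."""

    def __init__(self):
        self._votes = defaultdict(Counter)

    def fit(self, training_data, max_epochs=1):
        # training_data: iterable of (disease_type, target_site) pairs,
        # mirroring the per-sample training data of step S401.
        # max_epochs plays the role of a trivial "preset number of times"
        # training end condition.
        for _ in range(max_epochs):
            for disease_type, target_site in training_data:
                self._votes[disease_type][target_site] += 1
        return self

    def predict(self, disease_type):
        # Step S403: map the target object's disease type to a target site.
        if disease_type not in self._votes:
            return None
        return self._votes[disease_type].most_common(1)[0][0]

# Hypothetical sample objects, not real clinical data.
samples = [
    ("Parkinson's disease", "fingers"),
    ("Parkinson's disease", "fingers"),
    ("Parkinson's disease", "legs"),
    ("epilepsy", "face"),
]
model = TargetSiteClassifier().fit(samples)
```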
  • FIG. 9 is a schematic flowchart of another video processing method provided by an embodiment of the present application.
  • the method may further include step S108: acquiring network bandwidth data corresponding to the target object.
  • the network bandwidth data may include at least one of the following: telecom operator, tariff package type, Mbps (megabits per second, the number of bits (bits) transmitted per second), modem model and router model.
  • the step S103 may include: based on the disease type of the target object and the network bandwidth data corresponding to the target object, acquiring the streaming strategy of the real-time video data.
  • The streaming strategy is then related not only to the disease type but also takes the network bandwidth of the target object into account. A differentiated first preset definition can therefore be set according to the actual network bandwidth of the target object's environment during doctor-patient communication, which better matches the needs of practical applications.
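A possible shape for a strategy that combines disease type with measured bandwidth is sketched below; the lookup tables, thresholds and definitions are invented for illustration, since the application itself does not specify this mapping:

```python
# Hypothetical mapping from disease type to desired definition (w, h).
DEFINITION_BY_DISEASE = {
    "Parkinson's disease": (3840, 2160),  # tremor observation needs detail
    "depression": (1280, 720),            # mood assessment needs less
}

# Hypothetical caps: (minimum Mbps, highest definition it can carry).
BANDWIDTH_CAPS = [
    (25.0, (3840, 2160)),
    (8.0, (1920, 1080)),
    (0.0, (1280, 720)),
]

def choose_first_preset_definition(disease_type, bandwidth_mbps):
    """Pick the first preset definition from the disease type, then cap it
    by what the target object's measured network bandwidth can carry."""
    desired = DEFINITION_BY_DISEASE.get(disease_type, (1920, 1080))
    for min_mbps, cap in BANDWIDTH_CAPS:
        if bandwidth_mbps >= min_mbps:
            # Take the smaller of desired and cap, compared by pixel count.
            return min(desired, cap, key=lambda wh: wh[0] * wh[1])
    return desired
```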
  • FIG. 10 is a partial flowchart of another video processing method provided by the embodiment of the present application, and the method may further include steps S109-S111.
  • Step S109 Based on the disease type of the target object, obtain the display strategy of the streaming data to be pushed; the display strategy is used to indicate one or more of a preset size, preset position, preset brightness, preset contrast and preset saturation.
  • the preset size is, for example, 1000 pixels ⁇ 2000 pixels, 1000 pixels ⁇ 1000 pixels, 500 pixels ⁇ 200 pixels, etc.
  • the preset position is, for example, centered, left-centered, right-bottom, etc.
  • the preset brightness is, for example, -45, 23, 65, etc.
  • the preset contrast is, for example, -52, 56, 67, etc.
  • the preset saturation is, for example, -39, 35, 73, etc.
  • Step S110 Based on the display strategy, determine the data to be displayed corresponding to the streaming data to be pushed. For example, in the implementation in which the first part and the second part of the streaming data to be pushed are pushed separately, this step can synthesize a complete video picture (i.e., the data to be displayed) from the two parts and present it to the doctor through the display device.
  • Step S111 Display the data to be displayed by using a display device.
  • the display device is, for example, an OLED display screen, an LED display screen, an ink screen, and the like.
  • In this way, differentiated display strategies can be set for different disease types, and the display strategy then determines the data to be displayed corresponding to the data to be pushed, so that the display device can display it in a differentiated manner for different disease types; the degree of intelligence is further improved.
  • the embodiment of the present application does not limit the data to be displayed, which may be the data to be streamed itself, or may be video data obtained after data processing is performed on the data to be streamed.
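As a toy illustration of applying part of a display strategy, the following adjusts preset brightness and contrast on a flat list of 0–255 pixel values; the −100..100 contrast range and the mid-grey pivot are assumptions of this sketch, since the text only gives example values such as −45 or 56:

```python
def apply_display_strategy(pixels, brightness=0, contrast=0):
    """Apply preset brightness and contrast to 0-255 pixel values.

    brightness: additive offset (assumed range roughly -100..100).
    contrast:   mapped to a multiplicative gain around 1.0, pivoting
                at mid-grey (128) so that 0 means "no change".
    """
    gain = (contrast + 100) / 100.0
    out = []
    for p in pixels:
        v = (p - 128) * gain + 128 + brightness
        out.append(max(0, min(255, round(v))))  # clamp to valid range
    return out
```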
  • Fig. 11 is a partial flowchart of another video processing method provided in the embodiment of the present application.
  • Fig. 12 is a partial flowchart of yet another video processing method provided in the embodiment of the present application.
  • In some possible implementations, the display strategy may be used to indicate the preset size.
  • the method may further include step S106: determining the target site of the target object based on the disease type of the target object.
  • The step S110 may include: based on the preset size, scaling the first part of the data to be pushed corresponding to the target part to obtain the data to be displayed, so that the size of the target part displayed on the display device is not smaller than the preset size, and the first part of the data to be pushed corresponding to the target part is completely displayed on the display device.
  • In this way, the part of the video data corresponding to the target part can be scaled so that the target part is displayed at a moderate size on the display device, which further facilitates the doctor's observation of the target part, avoids the target part being too small or too large on the display device and affecting the doctor's observation of the patient, and greatly improves the doctor's experience.
  • scaling may also be performed on the second part of the data to be streamed other than the target site.
  • the embodiment of the present application does not limit the display effect of the data to be displayed on the display device.
  • The display device can display part or all of the data to be displayed; preferably, all of it is displayed.
  • data processing such as stretching, compression, proportional scaling, and translation can be performed on the data to be pushed to obtain the data to be displayed.
  • FIG. 13 is a schematic flowchart of acquiring data to be displayed according to an embodiment of the present application.
  • the display strategy may also be used to indicate the preset position.
  • the step S110 may include steps S501-S502.
  • Step S501 Based on the preset size, zoom the first part of the data to be streamed corresponding to the target part to obtain data to be translated.
  • Step S502 Translate the data to be translated based on the preset position to obtain the data to be displayed, so that the target part is displayed at a preset position on the display device.
  • For example, when the video data corresponding to the target part is close to the edge, it can be translated to make it centered.
  • In this way, the part of the video data corresponding to the target part can be translated so that the target part is displayed at the preset position on the display device; a preset position matching the doctor's preference or habit can thus be set, making it convenient for the doctor to observe the target part at that position.
  • video data other than the target part may also be translated.
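The scaling of step S501 and the translation of step S502 reduce to simple arithmetic. The function below is a hypothetical sketch (the name, the position keywords and the aspect-ratio-preserving rule are assumptions) of placing the target part at the preset size and position:

```python
def place_target_part(roi_w, roi_h, preset_w, preset_h,
                      screen_w, screen_h, position="center"):
    """S501: scale the target-part video so it is displayed no smaller than
    the preset size (keeping aspect ratio so the whole part stays visible).
    S502: translate the scaled part to the preset position on the display.
    Returns the scale factor and the placed (x, y, w, h) box."""
    # Smallest uniform scale reaching the preset size in both dimensions.
    scale = max(preset_w / roi_w, preset_h / roi_h)
    w, h = roi_w * scale, roi_h * scale
    # Translate according to the preset position.
    if position == "center":
        x, y = (screen_w - w) / 2, (screen_h - h) / 2
    elif position == "bottom-right":
        x, y = screen_w - w, screen_h - h
    else:  # default: top-left
        x, y = 0.0, 0.0
    return scale, (x, y, w, h)

# A 200x100 target-part ROI, preset size 400x300, on a 1920x1080 display.
scale, box = place_target_part(200, 100, 400, 300, 1920, 1080, "center")
```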
  • FIG. 14 is a schematic structural diagram of a video processing device provided by an embodiment of the present application.
  • the present application provides a video processing device, and the device includes: a video acquisition module 101, configured to acquire real-time video data.
  • the real-time video data is obtained by shooting the target object with a camera;
  • the disease type module 102 is used to obtain the disease type of the target object;
  • the streaming strategy module 103 is used to obtain, based on the disease type of the target object, a streaming strategy for the real-time video data, where the streaming strategy is used to indicate a first preset definition;
  • the data to be pushed module 104 is used to determine the streaming data to be pushed corresponding to the real-time video data based on the streaming strategy;
  • the data push module 105 is configured to push the data to be pushed to the doctor equipment.
  • FIG. 15 is a schematic structural diagram of another video processing device provided by an embodiment of the present application.
  • The device may further include: a target part module 106, configured to determine the target part of the target object based on the disease type of the target object; the data to be pushed module 104 is configured to adjust the definition of the first part of the real-time video data corresponding to the target part based on the first preset definition, to obtain the streaming data to be pushed.
  • Figure 16 is a schematic structural diagram of another video processing device provided in the embodiment of the present application
  • Figure 17 is a schematic structural diagram of a data module to be pushed provided in the embodiment of the present application
  • The device may further include: a definition acquisition module 107, configured to acquire a second preset definition, where the second preset definition is smaller than the first preset definition.
  • The data to be pushed module 104 includes: a first streaming adjustment unit 201, configured to adjust the definition of the first part of the real-time video data corresponding to the target part based on the first preset definition, to obtain the first part of the streaming data to be pushed; and a second streaming adjustment unit 202, configured to adjust the definition of the second part of the real-time video data based on the second preset definition, to obtain the second part of the streaming data to be pushed.
  • The second part of the real-time video data is part or all of the real-time video data other than the first part.
  • FIG. 18 is a schematic structural diagram of a data streaming module provided by an embodiment of the present application.
  • The data streaming module 105 may include: a composite streaming unit 301, configured to synthesize the data to be pushed from its first part and second part and push it to the doctor device; or, a separate streaming unit 302, configured to push the first part and the second part of the data to be pushed to the doctor device respectively.
  • FIG. 19 is a schematic structural diagram of a target part module provided by an embodiment of the present application.
  • The target part module 106 may include: a training data unit 401, configured to acquire training data of multiple sample objects, where the training data of each sample object includes the disease type and target site of the sample object; a model training unit 402, configured to train a deep learning model using the training data of the multiple sample objects to obtain a target site classification model; and a type input unit 403, configured to input the disease type of the target object into the target site classification model to obtain the target site of the target object.
  • FIG. 20 is a schematic structural diagram of another video processing device provided by the embodiment of the present application.
  • The device may further include: a network bandwidth module 108, configured to obtain the network bandwidth data corresponding to the target object; the streaming policy module 103 is configured to obtain the streaming policy of the real-time video data based on the disease type of the target object and the network bandwidth data corresponding to the target object.
  • FIG. 21 is a partial structural diagram of another video processing device provided in an embodiment of the present application.
  • The device may further include: a display strategy module 109, configured to obtain, based on the disease type of the target object, the display strategy of the data to be pushed, where the display strategy is used to indicate one or more of a preset size, preset position, preset brightness, preset contrast and preset saturation;
  • the data to be displayed module 110 is configured to determine the data to be displayed corresponding to the streaming data to be pushed based on the display strategy;
  • the data display module 111 is configured to display the data to be displayed by using a display device.
  • The display strategy may be used to indicate the preset size; the device may further include: a target part module 106, configured to determine the target part of the target object based on the disease type of the target object.
  • The data to be displayed module 110 is configured to scale the first part of the data to be pushed corresponding to the target part based on the preset size, to obtain the data to be displayed, so that the size of the target part displayed on the display device is not smaller than the preset size and the first part of the data to be pushed corresponding to the target part is completely displayed on the display device.
  • Fig. 22 is a schematic structural diagram of a data module to be displayed provided by an embodiment of the present application.
  • The display strategy can also be used to indicate the preset position; the data to be displayed module 110 may include: a data scaling unit 501, configured to scale the first part of the data to be pushed corresponding to the target part based on the preset size, to obtain data to be translated; and a data translation unit 502, configured to translate the data to be translated based on the preset position, to obtain the data to be displayed, so that the target part is displayed at the preset position on the display device.
  • FIG. 23 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • The embodiment of the present application also provides an electronic device 200; the electronic device 200 includes at least one memory 210, at least one processor 220, and a bus 230 connecting different platform systems.
  • Memory 210 may include readable media in the form of volatile memory, such as random access memory (RAM) 211 and/or cache memory 212 , and may further include read only memory (ROM) 213 .
  • the memory 210 also stores a computer program, and the computer program can be executed by the processor 220, so that the processor 220 executes the steps of the video processing method in the embodiment of the present application.
  • The specific implementation and the achieved technical effects are consistent with those of the method embodiments, and some content will not be repeated here.
  • Memory 210 may also include a utility 214 having at least one program module 215; such program modules include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, and each of these examples, or some combination thereof, may include an implementation of a network environment.
  • the processor 220 can execute the above-mentioned computer program, and can execute the utility tool 214 .
  • Bus 230 may represent one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
  • The electronic device 200 can also communicate with one or more external devices 240 (such as a keyboard, pointing device, Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 200, and/or with any device (e.g., a router, modem, etc.) that enables the electronic device 200 to communicate with one or more other computing devices. Such communication may occur through the input/output interface 250.
  • the electronic device 200 can also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through the network adapter 260 .
  • The network adapter 260 can communicate with other modules of the electronic device 200 through the bus 230. It should be appreciated that, although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage platforms, and the like.
  • The embodiment of the present application also provides a computer-readable storage medium for storing a computer program. When the computer program is executed, the steps of the video processing method in the embodiment of the present application are implemented.
  • The specific implementation and the achieved technical effects are consistent with those described in the above video processing method embodiment, and some content will not be repeated here.
  • Fig. 24 shows a program product 300 provided by this embodiment for realizing the above video processing method, which can adopt a portable compact disk read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer.
  • the program product 300 of the present application is not limited thereto.
  • The readable storage medium may be any tangible medium containing or storing a program, which may be used by or in combination with an instruction execution system, apparatus, or device.
  • Program product 300 may utilize any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • The readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more conductors, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • A readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing.
  • A readable signal medium may also be any readable medium other than a readable storage medium that can transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • The program code contained on a readable medium may be transmitted by any appropriate medium, including but not limited to wireless, wire, optical cable, RF, etc., or any suitable combination of the foregoing.
  • The program code for performing the operations of the present application can be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the C language or similar.
  • The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
  • The remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).


Abstract

A video processing method, apparatus, electronic device, and computer-readable storage medium. The method comprises: acquiring real-time video data, the real-time video data being obtained by a camera shooting a target object (S101); acquiring the disease type of the target object (S102); on the basis of the disease type of the target object, acquiring a streaming strategy for the real-time video data, the streaming strategy being used to indicate a first preset definition (S103); on the basis of the streaming strategy, determining data to be pushed corresponding to the real-time video data (S104); and pushing the data to be pushed to a doctor device (S105). The streaming strategy corresponds to the disease type, so different first preset definitions can be set for different disease types, and differentiated pushing of video data is performed on the real-time video data on the basis of the disease type of the target object; during a remote real-time video call between doctor and patient, this meets the doctor's need for differentiated observation of patients with different disease types, and the level of intelligence is high.

Description

视频处理方法、装置、电子设备及计算机可读存储介质
本申请要求于2021年7月20日提交的申请号为202110820705.8的中国专利的优先权,上述中国专利通过全文引用的形式并入。
技术领域
本申请涉及图像处理技术领域,尤其涉及视频处理方法、装置、电子设备及计算机可读存储介质。
背景技术
随着“互联网+医疗”的发展,已经有医疗机构为患者提供远程沟通服务,患者足不出户即可与医生进行视频通话,这种交互方式能够为患者带来近似于传统面对面就医的感受,而医生也可以通过视频画面了解到患者的精神状态,甚至直接观测患者的某项生理特征,例如观察帕金森患者的肢体抖动情况。
患者的疾病类型不同,医生与患者建立远程连接时,视频通话过程中所需要的清晰度不同。举例来说,医生需要观察帕金森患者抖动情况,因此需要设置较高清晰度;而情绪类疾病需要的清晰度相对较低,设置高清晰度的情况下,势必会对网络带宽带来考验。
发明内容
本申请的目的在于提供一种视频处理方法、装置、电子设备及计算机可读存储介质,解决现有技术在医患双方沟通时,无法根据疾病类型进行视频数据清晰度差异化推流的问题。
第一方面,本申请提供了一种视频处理方法,所述方法包括:获取实时视频数据,所述实时视频数据是摄像头拍摄目标对象得到的;获取所述目标对象的疾病类型;基于所述目标对象的疾病类型,获取所述实时视频数据的推流策略,所述推流策略用于指示第一预设清晰度;基于所述推流策略,确定所述实时视频数据对应的待推流数据;将所述待推流数据推流至医生设备。
在一种可能的实现方式中,所述方法还包括:基于所述目标对象的疾病类型,确定所述目标对象的目标部位;所述基于所述推流策略,确定所述实时视频数据对应的待推流数据,包括:基于所述第一预设清晰度,对所述目标部位对应的所述实时视频数据的第一部分进行清晰度调整,得到所述待推流数据。
在一种可能的实现方式中,所述方法还包括:获取第二预设清晰度,所述第二预设清晰度小于所述第一预设清晰度;所述基于所述第一预设清晰度,对所述目标部位对应的所述实时视频数据的第一部分进行清晰度调整,得到所述待推流数据,包括:基于所述第一预设清晰度,对所述目标部位对应的所述实时视频数据的第一部分进行清晰度调整,得到所述待推流数据的第一部分;基于所述第二预设清晰度,对所述实时视频数据的第二部分进行清晰度调整,得到所述待推流数据的第二部分,所述实时视频数据的第二部分是所述实时视频数据中第一部分以外的部分或者全部。
在一种可能的实现方式中,所述将所述待推流数据推流至医生设备,包括:基于所述待推流数据的第一部分和第二部分,合成得到所述待推流数据并推流至所述医生设备;或者,将所述待推流数据的第一部分和第二部分分别推流至所述医生设备。
在一种可能的实现方式中,所述基于所述目标对象的疾病类型,确定所述目标对象的目标部位,包括:获取多个样本对象的训练数据,每个样本对象的训练数据包括所述样本对象的疾病类型和目标部位;利用所述多个样本对象的训练数据训练深度学习模型,得到目标部位分类模型;将所述目标对象的疾病类型输入所述目标部位分类模型,得到所述目标对象的目标部位。
在一种可能的实现方式中,所述方法还包括:获取所述目标对象对应的网络带宽数据;所述基于所述目标对象的疾病类型,获取所述实时视频数据的推流策略,包括:基于所述目标对象的疾病类型和所述目标对象对应的网络带宽数据,获取所述实时视频数据的推流策略。
在一种可能的实现方式中,所述方法还包括:基于所述目标对象的疾病类型,获取所述待推流数据的显示策略,所述显示策略用于指示预设尺寸、预设位置、预设亮度、预设对比度和预设饱和度中的一种或多种;基于所述显示策略,确定所述待推流数据对应的待显示数据;利用显示设备显示所述待显示数据。
在一种可能的实现方式中,所述显示策略用于指示所述预设尺寸;所述方法还包括:基于所述目标对象的疾病类型,确定所述目标对象的目标部位;所述基于所述显示策略,确定所述待推流数据对应的待显示数据,包括:基于所述预设尺寸,对所述目标部位对应的所述待推流数据的第一部分进行缩放,得到所述待显示数据,以使所述目标部位显示在所述显示设备中的尺寸不小于所述预设尺寸,且使所述目标部位对应的所述待推流数据的第一部分完整显示在所述显示设备中。
在一种可能的实现方式中,所述显示策略还用于指示所述预设位置;所述基于所述预设尺寸,对所述目标部位对应的所述待推流数据的第一部分进行缩放,得到所述待显示数据,包括:基于所述预设尺寸,对所述目标部位对应的所述待推流数据的第一部分进行缩放,得到待平移数据;基于所述预设位置,对所述待平移数据进行平移,得到所述待显示数据,以使所述目标部位显示在所述显示设备中的预设位置。
第二方面,本申请提供了一种视频处理装置,所述装置包括:视频获取模块,用于获取实时视频数据,所述实时视频数据是摄像头拍摄目标对象得到的;疾病类型模块,用于获取所述目标对象的疾病类型;推流策略模块,用于基于所述目标对象的疾病类型,获取所述实时视频数据的推流策略,所述推流策略用于指示第一预设清晰度;待推流数据模块,用于基于所述推流策略,确定所述实时视频数据对应的待推流数据;数据推流模块,用于将所述待推流数据推流至医生设备。
在一种可能的实现方式中,所述装置还包括:目标部位模块,用于基于所述目标对象的疾病类型,确定所述目标对象的目标部位;所述待推流数据模块用于基于所述第一预设清晰度,对所述目标部位对应的所述实时视频数据的第一部分进行清晰度调整,得到所述待推流数据。
在一种可能的实现方式中,所述装置还包括:清晰度获取模块,用于获取第二预设清晰度,所述第二预设清晰度小于所述第一预设清晰度;所述待推流数据模块包括:第一推流调整单元,用于基于所述第一预设清晰度,对所述目标部位对应的所述实时视频数据的第一部分进行清晰度调整,得到所述待推流数据的第一部分;第二推流调整单元,用于基于所述第二预设清晰度,对所述实时视频数据的第二部分进行清晰度调整,得到所述待推流数据的第二部分,所述实时视频数据的第二部分是所述实时视频数据中第一部分以外的部分或者全部。
在一种可能的实现方式中,所述数据推流模块包括:合成推流单元,用于基于所述待推流数据的第一部分和第二部分,合成得到所述待推流数据并推流至所述医生设备;或者,分别推流单元,用于将所述待推流数据的第一部分和第二部分分别推流至所述医生设备。
在一种可能的实现方式中,所述目标部位模块包括:训练数据单元,用于获取多个样本对象的训练数据,每个样本对象的训练数据包括所述样本对象的疾病类型和目标部位;模型训练单元,用于利用所述多个样本对象的训练数据训练深度学习模型,得到目标部位分类模型;类型输入单元,用于将所述目标对象的疾病类型输入所述目标部位分类模型,得到所述目标对象的目标部位。
在一种可能的实现方式中,所述装置还包括:网络带宽模块,用于获取所述目标对象对应的网络带宽数据;所述推流策略模块用于基于所述目标对象的疾病类型和所述目标对象对应的网络带宽数据,获取所述实时视频数据的推流策略。
在一种可能的实现方式中,所述装置还包括:显示策略模块,用于基于所述目标对象的疾病类型,获取所述待推流数据的显示策略,所述显示策略用于指示预设尺寸、预设位置、预设亮度、预设对比度和预设饱和度中的一种或多种;待显示数据模块,用于基于所述显示策略,确定所述待推流数据对应的待显示数据;数据显示模块,用于利用显示设备显示所述待显示数据。
在一种可能的实现方式中,所述显示策略用于指示所述预设尺寸;所述装置还包括:目标部位模块,用于基于所述目标对象的疾病类型,确定所述目标对象的目标部位;所述待显示数据模块用于基于所述预设尺寸,对所述目标部位对应的所述待推流数据的第一部分进行缩放,得到所述待显示数据,以使所述目标部位显示在所述显示设备中的尺寸不小于所述预设尺寸,且使所述目标部位对应的所述待推流数据的第一部分完整显示在所述显示设备中。
在一种可能的实现方式中,所述显示策略还用于指示所述预设位置;所述待显示数据模块包括:数据缩放单元,用于基于所述预设尺寸,对所述目标部位对应的所述待推流数据的第一部分进行缩放,得到待平移数据;数据平移单元,用于基于所述预设位置,对所述待平移数据进行平移,得到所述待显示数据,以使所述目标部位显示在所述显示设备中的预设位置。
第三方面,本申请提供了一种电子设备,所述电子设备包括存储器和处理器,所述存储器存储有计算机程序,所述处理器执行所述计算机程序时实现上述任一项方法的步骤。
第四方面,本申请提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现上述任一项方法的步骤。
采用本申请提供的视频处理方法、装置、电子设备及计算机可读存储介质,至少具有以下优点:
利用摄像头拍摄目标对象得到实时视频数据,针对目标对象的疾病类型,获取相应的实时视频数据的推流策略,基于该推流策略确定实时视频数据对应的待推流数据并推流至医生设备,其中,推流策略与疾病类型相对应,能够为不同的疾病类型设置不同的第一预设清晰度,基于目标对象的疾病类型对实时视频数据进行视频数据差异化推送,当疾病类型对应较低的第一预设清晰度时以较低清晰度进行数据推流、尽量少占用带宽,当疾病类型对应较高的第一预设清晰度时再以较高清晰度进行数据推流,在医患进行远程实时视频通话的过程中,满足医生对不同疾病类型患者进行差异化观察的需求,智能化水平高,例如可以为帕金森的疾病类型设置较高的第一预设清晰度,为抑郁症的疾病类型设置较低的第一预设清晰度,由此在尽量少占用带宽的情况下,同时满足医生清晰地观察患者病情的需求。
附图说明
下面结合附图和实施例对本申请进一步说明。
图1是本申请实施例提供的一种视频处理方法的流程示意图;
图2是本申请实施例提供的一种患者端视频处理方法的流程示意图;
图3是本申请实施例提供的一种医生端视频处理方法的流程示意图;
图4是本申请实施例提供的另一种视频处理方法的流程示意图;
图5是本申请实施例提供的又一种视频处理方法的流程示意图;
图6是本申请实施例提供的一种获取待推流数据的流程示意图;
图7是本申请实施例提供的一种对待推流数据进行推流的流程示意图;
图8是本申请实施例提供的一种获取目标对象的目标部位的流程示意图;
图9是本申请实施例提供的又一种视频处理方法的流程示意图;
图10是本申请实施例提供的又一种视频处理方法的部分流程示意图;
图11是本申请实施例提供的又一种视频处理方法的部分流程示意图;
图12是本申请实施例提供的又一种视频处理方法的部分流程示意图;
图13是本申请实施例提供的一种获取待显示数据的流程示意图;
图14是本申请实施例提供的一种视频处理装置的结构示意图;
图15是本申请实施例提供的另一种视频处理装置的结构示意图;
图16是本申请实施例提供的又一种视频处理装置的结构示意图;
图17是本申请实施例提供的一种待推流数据模块的结构示意图;
图18是本申请实施例提供的一种数据推流模块的结构示意图;
图19是本申请实施例提供的一种目标部位模块的结构示意图;
图20是本申请实施例提供的又一种视频处理装置的结构示意图;
图21是本申请实施例提供的又一种视频处理装置的部分结构示意图;
图22是本申请实施例提供的一种待显示数据模块的结构示意图;
图23是本申请实施例提供的一种电子设备的结构示意图;
图24是本申请实施例提供的一种用于实现视频处理方法的程序产品的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。
在本申请中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b或c中的至少一项(个),可以表示:a,b,c,a和b,a和c,b和c或a和b和c,其中a、b和c可以是单个,也可以是多个。值得注意的是,“至少一项(个)”还可以解释成“一项(个)或多项(个)”。
需要说明的是,在不相冲突的前提下,以下描述的各实施例之间或各技术特征之间可以任意组合形成新的实施例。
本申请中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其他实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
参见图1至图3,图1是本申请实施例提供的一种视频处理方法的流程示意图,图2是本申请实施例提供的一种患者端视频处理方法的流程示意图,图3是本申请实施例提供的一种医生端视频处理方法的流程示意图。本申请实施例提供了一种视频处理方法,所述方法包括步骤S101~S105。
步骤S101:获取实时视频数据,所述实时视频数据是摄像头拍摄目标对象得到的。其中,摄像头例如可以包括光学摄像头和/或红外摄像头。目标对象一般而言是患者,例如是患有帕金森病、抑郁症、双相情感障碍的患者,还可以是患有其他疾病的患者,此处不做穷尽列举,一般而言,只要是需要医生对患者进行观察的病症均在本申请实施例所适用的范围之内。
步骤S102:获取所述目标对象的疾病类型。目标对象的疾病类型例如可以包括帕金森病、抑郁症、双相情感障碍中的至少一种。
步骤S103:基于所述目标对象的疾病类型,获取所述实时视频数据的推流策略,所述推流策略用于指示第一预设清晰度。本申请实施例中涉及到的清晰度,例如第一预设清晰度、第二预设清晰度等名称,是表征视频数据清晰程度的指标,在一般情况下,可以等同于分辨率处理,第一预设清晰度例如是2000像素×3000像素、960像素×540像素、1920像素×1080像素等,第二预设清晰度例如是2000像素×3000像素、960像素×540像素、1920像素×1080像素等。
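As an illustration of the mapping described in step S103, the disease-type-to-policy lookup can be sketched as a table keyed by disease type; the disease names, policy keys, and resolution values below are hypothetical examples in the spirit of the resolutions listed above, not the patent's actual policy table:

```python
# Hypothetical push-policy table: disease type -> first preset clarity,
# expressed as a (width, height) resolution as in the examples above.
PUSH_POLICIES = {
    "parkinson": {"first_preset_clarity": (1920, 1080)},  # limb tremor needs detail
    "depression": {"first_preset_clarity": (960, 540)},   # lower clarity suffices
}

DEFAULT_POLICY = {"first_preset_clarity": (960, 540)}

def get_push_policy(disease_type: str) -> dict:
    """Return the push-streaming policy for a disease type (step S103)."""
    return PUSH_POLICIES.get(disease_type, DEFAULT_POLICY)
```

A policy fetched this way then drives step S104, where the real-time video data is adjusted to the indicated clarity.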
步骤S104:基于所述推流策略,确定所述实时视频数据对应的待推流数据。此处待推流数据是指等待推流到医生设备的视频数据,其可以是实时视频数据本身,也可以是对实时视频数据进行数据处理后得到的视频数据。
步骤S105:将所述待推流数据推流至医生设备。
其中步骤S105可以包括:将所述待推流数据推流至服务器,以使所述服务器将所述待推流数据推送至医生设备。
其中,医生设备是指医生所使用的终端设备,例如是手机、平板电脑、计算机、智能穿戴设备等。所述医生设备用于显示待推流数据。在一些可能的实现方式中,医生设备还用于对患者进行远程程控。其中,远程程控是指医生和患者不处于同一空间的程控,例如医生在医院、患者在家中。
其中,推流是指把采集阶段封包好的内容传输到服务器的过程,推流的作用包括把数据传输到服务器,通过服务器传输至医生设备,如果不推流,医生设备就无法显示相应画面。
其中,医生设备可以设置有一个或多个显示屏。当医生设备设置有多个显示屏时,多个显示屏可以排列为M行N列的形状,且紧密贴合、形成一个平面或者近似于平面的显示区域,以使用户在观看医生设备时无法分辨出多个显示屏的缝隙,而是像观看一个整体显示屏一样。在一个实现方式中,多个显示屏的形状、结构相同,且排列为4行6列的形状。在一个实现方式中,多个显示屏的形状、结构相同,且排列为3行3列的形状。
对于使用医生设备的医生,可以拉流实时显示患者的视频图像。拉流例如是指服务器已有直播内容,用指定地址进行拉取的过程。
由此,利用摄像头拍摄目标对象得到实时视频数据,针对目标对象的疾病类型,获取相应的实时视频数据的推流策略,基于该推流策略确定实时视频数据对应的待推流数据并推流至医生设备,其中,推流策略与疾病类型相对应,能够为不同的疾病类型设置不同的第一预设清晰度,基于目标对象的疾病类型对实时视频数据进行视频数据差异化推送,当疾病类型对应较低的第一预设清晰度时以较低清晰度进行数据推流、尽量少占用带宽,当疾病类型对应较高的第一预设清晰度时再以较高清晰度进行数据推流,在医患进行远程实时视频通话的过程中,满足医生对不同疾病类型患者进行差异化观察的需求,智能化水平高,例如可以为帕金森的疾病类型设置较高的第一预设清晰度,为抑郁症的疾病类型设置较低的第一预设清晰度,由此在尽量少占用带宽的情况下,同时满足医生清晰地观察患者病情的需求。
本申请实施例对待推流数据不做限定,其可以是仅对实时视频数据的一部分进行清晰度调整后得到的,也可以是对实时视频数据的全部进行清晰度调整后得到的。
在一些可能的方式中,待推流数据推流至医生设备后,医生设备可以实时显示患者的视频图像,医生利用植入式神经刺激系统对患者进行治疗。
植入式神经刺激系统主要包括植入体内的刺激器以及体外的程控设备。现有的神经调控技术主要是通过立体定向手术在体内特定结构(即靶点)植入电极,并由植入患者体内的刺激器经电极向靶点发放电脉冲,调控相应神经结构和网络的电活动及其功能,从而改善症状、缓解病痛。其中,刺激器可以是植入式神经电刺激装置、植入式心脏电刺激系统(又称心脏起搏器)、植入式药物输注装置(Implantable Drug Delivery System,简称IDDS)和导线转接装置中的任意一种。植入式神经电刺激装置例如是脑深部电刺激系统(Deep Brain Stimulation,简称DBS)、植入式脑皮层刺激系统(Cortical Nerve Stimulation,简称CNS)、植入式脊髓电刺激系统(Spinal Cord Stimulation,简称SCS)、植入式骶神经电刺激系统(Sacral Nerve Stimulation,简称SNS)、植入式迷走神经电刺激系统(Vagus Nerve Stimulation,简称VNS)等。
刺激器可以包括IPG、延伸导线和电极导线,IPG(implantable pulse generator,植入式脉冲发生器)设置于患者体内,依靠密封电池和电路向生物体组织提供可控制的电刺激能量,通过植入的延伸导线和电极导线,为生物体组织的特定区域提供一路或两路可控制的特定电刺激能量。延伸导线配合IPG使用,作为电刺激信号的传递媒体,将IPG产生的电刺激信号,传递给电极导线。电极导线将IPG产生的电刺激信号,通过多个电极触点,向生物体组织的特定区域释放电刺激能量;所述植入式医疗设备具有单侧或双侧的一路或多路电极导线,所述电极导线上设置有多个电极触点,所述电极触点可以均匀排列或者非均匀排列在电极导线的周向上。作为一个示例,所述电极触点以4行3列的阵列(共计12个电极触点)排列在电极导线的周向上。电极触点可以包括刺激电极触点和/或采集电极触点。电极触点例如可以采用片状、环状、点状等形状。
在一些可能的实现方式中,受刺激的生物体组织可以是患者的脑组织,受刺激的部位可以是脑组织的特定部位。当患者的疾病类型不同时,受刺激的部位一般来说是不同的,所使用的刺激触点(单源或多源)的数量、一路或多路(单通道或多通道)特定电刺激信号的运用以及刺激参数数据也是不同的。本申请对适用的疾病类型不做限定,其可以是脑深部刺激(DBS)、脊髓刺激(SCS)、骨盆刺激、胃刺激、外周神经刺激、功能性电刺激所适用的疾病类型。其中,DBS可以用于治疗或管理的疾病类型包括但不限于:痉挛疾病(例如,癫痫)、疼痛、偏头痛、精神疾病(例如,重度抑郁症(MDD))、躁郁症、焦虑症、创伤后压力心理障碍症、轻郁症、强迫症(OCD)、行为障碍、情绪障碍、记忆障碍、心理状态障碍、移动障碍(例如,特发性震颤或帕金森氏病)、亨廷顿病、阿尔茨海默症、药物成瘾症、自闭症或其他神经学或精神科疾病和损害。当DBS用于治疗药物成瘾症患者时,可以帮助吸毒人员戒毒,提升他们的幸福感和生命质量。
本申请中的刺激器以脑深部刺激器(DBS)为例进行阐述,程控设备和刺激器建立程控连接时,可以利用程控设备调整刺激器的电刺激信号的刺激参数,也可以通过刺激器感测患者脑深部的生物电活动,并可以通过所感测到的生物电活动来继续调节刺激器的电刺激信号的刺激参数。电刺激信号的刺激参数可以包括频率(例如是单位时间1s内的电刺激脉冲信号个数,单位为Hz)、脉宽(每个脉冲的持续时间,单位为μs)和幅值(一般用电压表述,即每个脉冲的强度,单位为V)中的任意一种或多种。在具体应用中,可以在电流模式或者电压模式下对刺激器的各刺激参数进行调节(实现对患者的精细化治疗)。
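The stimulation parameters enumerated above (frequency in Hz, pulse width in μs, amplitude in V) can be modeled as a small record with range checks. This is only an illustrative sketch: the field names and the numeric ranges below are assumptions for demonstration, not limits taken from the patent or from any real device:

```python
from dataclasses import dataclass

@dataclass
class StimulationParams:
    frequency_hz: float    # pulses per second (Hz)
    pulse_width_us: float  # duration of each pulse (microseconds)
    amplitude_v: float     # strength of each pulse (volts)

    def validate(self) -> bool:
        # Illustrative ranges only; a real stimulator defines its own limits.
        return (0 < self.frequency_hz <= 250
                and 0 < self.pulse_width_us <= 500
                and 0 <= self.amplitude_v <= 10)
```

A program-control device could hold such a record per patient and reject out-of-range adjustments before sending them to the stimulator.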
参见图4,图4是本申请实施例提供的另一种视频处理方法的流程示意图,在一些可能的方式中,所述方法还可以包括步骤S106:基于所述目标对象的疾病类型,确定所述目标对象的目标部位。其中,目标部位例如是面部、眼睛、鼻子、嘴巴、耳朵、手指、手臂、脚、腿、背部等。
所述步骤S104可以包括:基于所述第一预设清晰度,对所述目标部位对应的所述实时视频数据的第一部分进行清晰度调整,得到所述待推流数据。
由此,目标部位是医生较为关心的能够反应患者病情的身体部位,通过目标对象的疾病类型确定其目标部位,对目标部位相应的一部分实时视频数据进行清晰度调整得到待推流数据,由此,待推流数据中目标部位的清晰度是第一预设清晰度,可以预先设置合适的第一预设清晰度,保证医生关心的目标部位能够以需要的清晰度呈现。
参见图5和图6,图5是本申请实施例提供的又一种视频处理方法的流程示意图,图6是本申请实施例提供的一种获取待推流数据的流程示意图,在一些可能的方式中,所述方法还可以包括步骤S107:获取第二预设清晰度,所述第二预设清晰度小于所述第一预设清晰度。例如,当第一预设清晰度是2000像素×3000像素时,第二预设清晰度可以为1000像素×2000像素、2000像素×1500像素、1000像素×3000像素等。
所述步骤S104可以包括步骤S201~S202。
步骤S201:基于所述第一预设清晰度,对所述目标部位对应的所述实时视频数据的第一部分进行清晰度调整,得到所述待推流数据的第一部分。
步骤S202:基于所述第二预设清晰度,对所述实时视频数据的第二部分进行清晰度调整,得到所述待推流数据的第二部分,所述实时视频数据的第二部分是所述实时视频数据中第一部分以外的部分或者全部。
当目标部位为手指时,目标部位对应的实时视频数据的第一部分例如是包含手指的视频数据(可以是单独的一个手指,也可以是多个手指,也可以是多个手指+手掌),此时,第二部分即实时视频数据中第一部分以外的部分或者全部是不包含手指的视频数据(例如是抹除整个人像后的背景部分,或者抹除患者手指后的人像部分以及背景部分)。
当目标部位为眼睛时,目标部位对应的实时视频数据的第一部分例如是包含眼睛的视频数据(可以是单独的双眼,也可以是整个面部),此时,第二部分即实时视频数据中第一部分以外的部分或者全部是不包含眼睛的视频数据(例如是抹除人像后的背景部分,或者抹除患者眼睛后的人像部分以及背景部分)。
上述实时视频数据的第一部分和第二部分可以从原始的实时视频数据中截取得到,并分别进行清晰度调整以得到待推流数据的第一部分和第二部分,且待推流数据的第一部分的清晰度高于待推流数据的第二部分。
由此,医生通常对目标部位以外部分的视频数据的清晰度没有较高要求,因此,对目标部位对应的实时视频数据的第一部分和第一部分以外的第二部分的部分或者全部分别进行差异化的清晰度调整,当医生远程观察患者时,目标部位以及目标部位以外的部分能够以不同的清晰度呈现,其中,目标部位能够以较高的清晰度呈现,保证医生清晰观察患者的需求,目标部位以外部分能够以较低清晰度呈现,减少数据推流过程的数据量,进一步减少对带宽的占用,相较于现有技术来说,智能化程度得到极大提高。
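The split described in steps S201–S202 can be sketched as follows. This is a minimal illustration using plain NumPy striding as a stand-in for a real clarity/encoding pipeline; the function name and the (top, left, height, width) ROI convention are assumptions of this sketch:

```python
import numpy as np

def split_and_downscale(frame: np.ndarray, roi: tuple, factor: int = 2):
    """Crop the target-part region (first part) at full clarity and
    downsample the whole frame (the "全部" variant of the second part)
    by simple striding. `roi` is (top, left, height, width)."""
    top, left, h, w = roi
    first_part = frame[top:top + h, left:left + w].copy()  # first preset clarity
    second_part = frame[::factor, ::factor].copy()         # second, lower clarity
    return first_part, second_part
```

In practice the second part could instead be the frame with the ROI masked out, matching the "部分" variant; the downscale would also use a proper video codec rather than striding.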
当待推流数据包含两部分数据时,本申请实施例对其推流方式不做限定,可以对两部分数据进行合成推流或者分别推流。
参见图7,图7是本申请实施例提供的一种对待推流数据进行推流的流程示意图,在一些可能的方式中,所述步骤S105可以包括步骤S301或者S302。
步骤S301:基于所述待推流数据的第一部分和第二部分,合成得到所述待推流数据并推流至所述医生设备。
步骤S302:将所述待推流数据的第一部分和第二部分分别推流至所述医生设备。
在一个可能的实现方式中,步骤S302可以包括:将待推流数据的第一部分和第二部分分别推流至服务器,以使服务器将待推流数据的第一部分和第二部分分别推送至医生设备,从而令医生设备根据待推流数据的第一部分和第二部分合成得到完整的视频画面并显示给医生。这样做的好处是,相对于将待推流数据作为一个整体采用较高清晰度进行数据传输来说,将待推流数据的第二部分以较低的清晰度进行数据传输,大大减少了传输过程(先推流至服务器,再由服务器推送至医生设备)的数据总量(因为手指、眼睛等部位通常占整个实时视频数据的比例较小,实时视频数据的第二部分通常比第一部分的数据量大很多),提高了数据推送效率,并且降低了单位时间内的数据下载量,降低医生设备产生卡顿的概率,医生可以流畅地观看患者的视频画面,且这种视频画面中医生所关心的部位具有高清晰度的显示效果,以使医生能够清晰观察这些部位,同时其他部位具有较低清晰度的显示效果,从而产生强烈的对比效果,使得医生可以把注意力更多地放在所关心的部位上,更专注地观察这些部位的情况,从技术手段上有力地提高医生的治疗效果。
由此,目标部位对应的待推流数据的第一部分和目标部位以外部分对应的待推流数据的第二部分可以合成得到待推流数据后进行推流,也可以分别推流,由此,可以根据实际应用中的性能需求和成本需求,选择合适的推流方式。
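A back-of-the-envelope illustration of the bandwidth saving claimed above, with made-up frame sizes (the resolutions and the uncompressed 3-bytes-per-pixel model are assumptions; real savings depend on the codec):

```python
def frame_bytes(width: int, height: int, bytes_per_pixel: int = 3) -> int:
    """Raw size of one uncompressed frame."""
    return width * height * bytes_per_pixel

# Whole frame at high clarity vs. ROI at high clarity + full frame at low clarity.
full_hi = frame_bytes(1920, 1080)       # single high-clarity stream
roi_hi = frame_bytes(480, 270)          # hand/eye region, first preset clarity
background_lo = frame_bytes(960, 540)   # second part, second preset clarity
split_total = roi_hi + background_lo

saving = 1 - split_total / full_hi      # fraction of bytes avoided per frame
```

Under these assumed sizes the split stream carries under a third of the bytes of the single high-clarity stream, which is the effect the paragraph above describes.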
本申请实施例对获取目标对象的目标部位的方式不做限定,其可以是医生通过医生设备人工手动录入,也可以是通过数据接口导入(或者说读取)数据库中的数据,还可以使用深度学习技术来获取。
参见图8,图8是本申请实施例提供的一种获取目标对象的目标部位的流程示意图,在一些可能的方式中,所述步骤S106可以包括步骤S401~S403。
步骤S401:获取多个样本对象的训练数据,每个样本对象的训练数据包括所述样本对象的疾病类型和目标部位。其中,样本对象的训练数据可以是采集真人患者得到的真实数据,也可以是通过人工智能算法生成的伪真人数据。
步骤S402:利用所述多个样本对象的训练数据训练深度学习模型,得到目标部位分类模型。
步骤S403:将所述目标对象的疾病类型输入所述目标部位分类模型,得到所述目标对象的目标部位。
由此,对深度学习模型进行训练得到目标部位分类模型,只要将目标对象的疾病类型输入目标部位分类模型,即可实时获取目标对象的目标部位,尤其是当样本对象数量足够多时,准确度有望达到极高水平,相对于手动录入目标部位或者导入目标部位的方式来说,智能化水平高,并且能够避免人为失误,减少与医护人员设备、数据存储设备之间的数据交互,避免患者隐私泄露。通过设计,建立适量的神经元计算节点和多层运算层次结构,选择合适的输入层和输出层,就可以得到深度学习模型,通过该深度学习模型的学习和调优,建立起从输入到输出的函数关系,虽然不能100%找到输入与输出的函数关系,但是可以尽可能地逼近现实的关联关系,由此训练得到的目标部位分类模型,可以实现对目标部位的自动分类,且分类结果可靠性高。
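The disease-type → target-part mapping learned in steps S401–S403 can be sketched, in place of a full deep learning model, as a frequency-based classifier over the sample objects' training data. This is a deliberate simplification for illustration (the patent trains a deep learning model); the function names and sample labels are hypothetical:

```python
from collections import Counter, defaultdict

def train_target_part_classifier(samples):
    """samples: iterable of (disease_type, target_part) training pairs (S401).
    Returns a predict function mapping a disease type to the most common
    target part seen for it (standing in for steps S402-S403)."""
    counts = defaultdict(Counter)
    for disease_type, target_part in samples:
        counts[disease_type][target_part] += 1

    def predict(disease_type):
        if disease_type not in counts:
            return None  # unseen disease type
        return counts[disease_type].most_common(1)[0][0]

    return predict
```

With enough samples per disease type, the majority label approximates what a trained classifier would output for this one-input, one-output task.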
在一些实施方式中,本申请可以采用上述训练过程训练得到目标部位分类模型,在另一些实施方式中,本申请可以采用预先训练好的目标部位分类模型。
本申请对目标部位分类模型的训练过程不作限定,其例如可以采用上述监督学习的训练方式,或者可以采用半监督学习的训练方式,或者可以采用无监督学习的训练方式。
所述步骤S402可以包括:
根据所述多个样本对象的训练数据,更新所述深度学习模型的模型参数;
检测是否满足预设的训练结束条件,如果是,则停止训练,并将训练得到的所述深度学习模型作为所述目标部位分类模型,如果否,则利用下一个样本对象的训练数据训练所述深度学习模型。
本申请对预设的训练结束条件不作限定,其例如可以是训练次数达到预设次数(预设次数例如是1次、3次、10次、100次、1000次、10000次等),或者可以是训练集中的训练数据都完成一次或多次训练,或者可以是本次训练得到的总损失值不大于预设损失值。
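The update/termination logic above can be sketched as a generic training loop. The `model_step` callable is a placeholder for "update the model parameters with one training sample and return the current loss"; the default thresholds are illustrative:

```python
def train(model_step, data_iter, max_steps: int = 1000,
          loss_threshold: float = 0.01):
    """Train until a preset end condition holds: either the preset number
    of steps is reached, or the loss drops to the preset loss value."""
    steps = 0
    loss = float("inf")
    for sample in data_iter:
        loss = model_step(sample)  # update parameters, get current loss
        steps += 1
        if steps >= max_steps or loss <= loss_threshold:
            break                  # preset training-end condition met
    return steps, loss
```

Either end condition mentioned in the text (step budget or loss target) maps directly onto the `if` test inside the loop.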
参见图9,图9是本申请实施例提供的又一种视频处理方法的流程示意图,在一些可能的方式中,所述方法还可以包括步骤S108:获取所述目标对象对应的网络带宽数据。其中,网络带宽数据例如可以包括以下至少一种:电信运营商、资费套餐类型、Mbps(megabits per second,每秒传输的位(比特)数量)、调制解调器型号和路由器型号。
所述步骤S103可以包括:基于所述目标对象的疾病类型和所述目标对象对应的网络带宽数据,获取所述实时视频数据的推流策略。
由此,推流策略不仅与疾病类型相关,还考虑到目标对象的网络带宽情况,由此,能够根据医患沟通过程中目标对象所处环境的网络带宽的实际情况,设置差异化的第一预设清晰度,更加符合实际应用中的需求。
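One way the disease type and the measured bandwidth could be combined (the variant of step S103 just described) is to start from the disease-specific preference and cap it by what the link allows. The disease names, Mbps threshold, and resolutions below are illustrative assumptions:

```python
def choose_first_preset_clarity(disease_type: str, bandwidth_mbps: float):
    """Pick the first preset clarity from disease type + network bandwidth."""
    preferred = (1920, 1080) if disease_type == "parkinson" else (960, 540)
    if bandwidth_mbps < 4 and preferred == (1920, 1080):
        return (1280, 720)  # degrade gracefully on a slow link
    return preferred
```

A fuller implementation would also consider the other bandwidth data the text lists (carrier, tariff plan, modem and router models).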
参见图10,在一些可能的方式中,图10是本申请实施例提供的又一种视频处理方法的部分流程示意图,所述方法还可以包括步骤S109~S111。
步骤S109:基于所述目标对象的疾病类型,获取所述待推流数据的显示策略,所述显示策略用于指示预设尺寸、预设位置、预设亮度、预设对比度和预设饱和度中的一种或多种。其中,预设尺寸例如是1000像素×2000像素、1000像素×1000像素、500像素×200像素等;预设位置例如是居中、左居中、右下等;预设亮度例如是-45、23、65等;预设对比度例如是-52、56、67等;预设饱和度例如是-39、35、73等。
步骤S110:基于所述显示策略,确定所述待推流数据对应的待显示数据。例如在将待推流数据的第一部分和第二部分分别推送的实现方式中,该步骤能够根据待推流数据的第一部分和第二部分合成得到完整的视频画面(即待显示数据)并通过显示设备显示给医生。
步骤S111:利用显示设备显示所述待显示数据。其中,显示设备例如是OLED显示屏、LED显示屏、墨水屏等。
由此,根据疾病类型的不同,能够设置差异化的显示策略,再由显示策略确定待推流数据对应的待显示数据,由此,显示设备能够以差异化的显示方式显示不同疾病类型的待显示数据,智能化程度得到进一步提高。
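The display policy of step S109 can be pictured as a per-disease configuration record. The keys mirror the presets listed above (size, position, brightness, contrast, saturation) and the numeric values echo the examples in the text; both the disease names and the exact values are hypothetical:

```python
# Hypothetical display policies keyed by disease type (step S109).
DISPLAY_POLICIES = {
    "parkinson": {"size": (1000, 1000), "position": "center",
                  "brightness": 23, "contrast": 56, "saturation": 35},
    "depression": {"size": (500, 200), "position": "bottom_right",
                   "brightness": -45, "contrast": -52, "saturation": -39},
}
```

Step S110 would then read the record for the current patient's disease type and transform the to-be-pushed data accordingly before step S111 displays it.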
本申请实施例对待显示数据不做限定,其可以是待推流数据自身,也可以是对待推流数据进行数据处理后得到的视频数据。
参见图11和图12,图11是本申请实施例提供的又一种视频处理方法的部分流程示意图,图12是本申请实施例提供的又一种视频处理方法的部分流程示意图。在一些可能的方式中,所述显示策略可以用于指示所述预设尺寸。
所述方法还可以包括步骤S106:基于所述目标对象的疾病类型,确定所述目标对象的目标部位。
所述步骤S110可以包括:基于所述预设尺寸,对所述目标部位对应的所述待推流数据的第一部分进行缩放,得到所述待显示数据,以使所述目标部位显示在所述显示设备中的尺寸不小于所述预设尺寸,且使所述目标部位对应的所述待推流数据的第一部分完整显示在所述显示设备中。
当目标部位对应的待推流数据的第一部分的尺寸较小时,可以对其进行放大;当目标部位对应的待推流数据的第一部分的尺寸较大时,可以对其进行缩小。
由此,能够对目标部位对应的部分视频数据进行缩放,使得目标部位显示在显示设备中的尺寸适中,进一步方便了医生对目标部位进行观察,避免显示设备中目标部位太小或者太大影响医生对患者的观察,极大地提高了医生的使用体验。
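The two constraints just stated — the target part must display no smaller than the preset size, and the whole first part must fit on the display — bound the zoom factor from below and above. A minimal sketch of picking such a factor (sizes are (width, height) tuples; the function name is an assumption):

```python
def fit_scale(part_size, preset_size, screen_size):
    """Smallest zoom that shows the target part at least at preset size,
    or None when no zoom can satisfy both constraints at once."""
    pw, ph = part_size
    min_scale = max(preset_size[0] / pw, preset_size[1] / ph)   # reach preset size
    max_scale = min(screen_size[0] / pw, screen_size[1] / ph)   # still fit on screen
    if min_scale > max_scale:
        return None
    return min_scale
```

Returning the lower bound keeps the part as small as the preset allows, leaving room on screen for the rest of the picture; any value up to `max_scale` would also satisfy both constraints.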
另外,本申请实施例还可以对目标部位以外的待推流数据的第二部分进行缩放。
本申请实施例对待显示数据在显示设备中的显示效果不做限定,当待显示数据的尺寸与显示设备的分辨率不匹配时,显示设备可以显示待显示数据的部分或者全部,优选是显示待显示数据的全部。另外,为了提高显示设备的显示效果,可以对待推流数据进行拉伸、压缩、等比例缩放、平移等数据处理,得到待显示数据。
参见图13,图13是本申请实施例提供的一种获取待显示数据的流程示意图,在一些可能的方式中,所述显示策略还可以用于指示所述预设位置。
所述步骤S110可以包括步骤S501~S502。
步骤S501:基于所述预设尺寸,对所述目标部位对应的所述待推流数据的第一部分进行缩放,得到待平移数据。
步骤S502:基于所述预设位置,对所述待平移数据进行平移,得到所述待显示数据,以使所述目标部位显示在所述显示设备中的预设位置。
例如当目标部位对应的视频数据靠近边缘时,可以对其进行平移使其居中。
由此,能够对目标部位对应的部分视频数据进行平移,使得目标部位显示在显示设备中的预设位置,由此,设置医生偏好或者习惯的预设位置,医生能够方便地在显示设备的预设位置观察患者,避免目标部位显示在医生不习惯或者不喜欢的位置,影响医生的使用体验。
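The panning of step S502 amounts to computing a translation that moves the scaled first part's center onto the preset position, clamped so the part stays fully on screen. A sketch, with an assumed (x, y, width, height) box convention:

```python
def pan_offset(part_box, preset_center, screen_size):
    """Offset (dx, dy) that moves the box center toward the preset position
    while keeping the whole box inside the screen."""
    x, y, w, h = part_box
    cx, cy = preset_center
    dx = cx - (x + w / 2)
    dy = cy - (y + h / 2)
    # Clamp so the box remains fully visible after panning.
    dx = min(max(dx, -x), screen_size[0] - (x + w))
    dy = min(max(dy, -y), screen_size[1] - (y + h))
    return dx, dy
```

For a part near an edge (the example above), the unclamped offset already centers it; clamping only intervenes when the preset position would push part of the box off screen.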
另外,本申请实施例还可以对目标部位以外的视频数据进行平移。
参见图14,图14是本申请实施例提供的一种视频处理装置的结构示意图,本申请提供了一种视频处理装置,所述装置包括:视频获取模块101,用于获取实时视频数据,所述实时视频数据是摄像头拍摄目标对象得到的;疾病类型模块102,用于获取所述目标对象的疾病类型;推流策略模块103,用于基于所述目标对象的疾病类型,获取所述实时视频数据的推流策略,所述推流策略用于指示第一预设清晰度;待推流数据模块104,用于基于所述推流策略,确定所述实时视频数据对应的待推流数据;数据推流模块105,用于将所述待推流数据推流至医生设备。
参见图15,图15是本申请实施例提供的另一种视频处理装置的结构示意图,在一些可能的方式中,所述装置还可以包括:目标部位模块106,用于基于所述目标对象的疾病类型,确定所述目标对象的目标部位;所述待推流数据模块104用于基于所述第一预设清晰度,对所述目标部位对应的所述实时视频数据的第一部分进行清晰度调整,得到所述待推流数据。
参见图16和图17,图16是本申请实施例提供的又一种视频处理装置的结构示意图,图17是本申请实施例提供的一种待推流数据模块的结构示意图,在一些可能的方式中,所述装置还可以包括:清晰度获取模块107,用于获取第二预设清晰度,所述第二预设清晰度小于所述第一预设清晰度;所述待推流数据模块104包括:第一推流调整单元201,用于基于所述第一预设清晰度,对所述目标部位对应的所述实时视频数据的第一部分进行清晰度调整,得到所述待推流数据的第一部分;第二推流调整单元202,用于基于所述第二预设清晰度,对所述实时视频数据的第二部分进行清晰度调整,得到所述待推流数据的第二部分,所述实时视频数据的第二部分是所述实时视频数据中第一部分以外的部分或者全部。
参见图18,图18是本申请实施例提供的一种数据推流模块的结构示意图,在一些可能的方式中,所述数据推流模块105可以包括:合成推流单元301,用于基于所述待推流数据的第一部分和第二部分,合成得到所述待推流数据并推流至所述医生设备;或者,分别推流单元302,用于将所述待推流数据的第一部分和第二部分分别推流至所述医生设备。
参见图19,图19是本申请实施例提供的一种目标部位模块的结构示意图,在一些可能的方式中,所述目标部位模块106可以包括:训练数据单元401,用于获取多个样本对象的训练数据,每个样本对象的训练数据包括所述样本对象的疾病类型和目标部位;模型训练单元402,用于利用所述多个样本对象的训练数据训练深度学习模型,得到目标部位分类模型;类型输入单元403,用于将所述目标对象的疾病类型输入所述目标部位分类模型,得到所述目标对象的目标部位。
参见图20,图20是本申请实施例提供的又一种视频处理装置的结构示意图,在一些可能的方式中,所述装置还可以包括:网络带宽模块108,用于获取所述目标对象对应的网络带宽数据;所述推流策略模块103用于基于所述目标对象的疾病类型和所述目标对象对应的网络带宽数据,获取所述实时视频数据的推流策略。
参见图21,图21是本申请实施例提供的又一种视频处理装置的部分结构示意图,在一些可能的方式中,所述装置还可以包括:显示策略模块109,用于基于所述目标对象的疾病类型,获取所述待推流数据的显示策略,所述显示策略用于指示预设尺寸、预设位置、预设亮度、预设对比度和预设饱和度中的一种或多种;待显示数据模块110,用于基于所述显示策略,确定所述待推流数据对应的待显示数据;数据显示模块111,用于利用显示设备显示所述待显示数据。
在一些可能的方式中,所述显示策略可以用于指示所述预设尺寸;所述装置还可以包括:目标部位模块106,用于基于所述目标对象的疾病类型,确定所述目标对象的目标部位;所述待显示数据模块110用于基于所述预设尺寸,对所述目标部位对应的所述待推流数据的第一部分进行缩放,得到所述待显示数据,以使所述目标部位显示在所述显示设备中的尺寸不小于所述预设尺寸,且使所述目标部位对应的所述待推流数据的第一部分完整显示在所述显示设备中。
参见图22,图22是本申请实施例提供的一种待显示数据模块的结构示意图,在一些可能的方式中,所述显示策略还可以用于指示所述预设位置;所述待显示数据模块110可以包括:数据缩放单元501,用于基于所述预设尺寸,对所述目标部位对应的所述待推流数据的第一部分进行缩放,得到待平移数据;数据平移单元502,用于基于所述预设位置,对所述待平移数据进行平移,得到所述待显示数据,以使所述目标部位显示在所述显示设备中的预设位置。
参见图23,图23是本申请实施例提供的一种电子设备的结构示意图,本申请实施例还提供了一种电子设备200,电子设备200包括至少一个存储器210、至少一个处理器220以及连接不同平台系统的总线230。
存储器210可以包括易失性存储器形式的可读介质,例如随机存取存储器(RAM)211和/或高速缓存存储器212,还可以进一步包括只读存储器(ROM)213。
其中,存储器210还存储有计算机程序,计算机程序可以被处理器220执行,使得处理器220执行本申请实施例中视频处理方法的步骤,其具体实现方式与上述视频处理方法的实施例中记载的实施方式、所达到的技术效果一致,部分内容不再赘述。
存储器210还可以包括具有至少一个程序模块215的实用工具214,这样的程序模块215包括但不限于:操作系统、一个或者多个应用程序、其它程序模块以及程序数据,这些示例中的每一个或某种组合中可能包括网络环境的实现。
相应的,处理器220可以执行上述计算机程序,以及可以执行实用工具214。
总线230可以为表示几类总线结构中的一种或多种,包括存储器总线或者存储器控制器、外围总线、图形加速端口、处理器或者使用多种总线结构中的任意总线结构的局域总线。
电子设备200也可以与一个或多个外部设备240例如键盘、指向设备、蓝牙设备等通信,还可与一个或者多个能够与该电子设备200交互的设备通信,和/或与使得该电子设备200能与一个或多个其它计算设备进行通信的任何设备(例如路由器、调制解调器等)通信。这种通信可以通过输入输出接口250进行。并且,电子设备200还可以通过网络适配器260与一个或者多个网络(例如局域网(LAN),广域网(WAN)和/或公共网络,例如因特网)通信。网络适配器260可以通过总线230与电子设备200的其它模块通信。应当明白,尽管图中未示出,可以结合电子设备200使用其它硬件和/或软件模块,包括但不限于:微代码、设备驱动器、冗余处理器、外部磁盘驱动阵列、RAID系统、磁带驱动器以及数据备份存储平台等。
本申请实施例还提供了一种计算机可读存储介质,该计算机可读存储介质用于存储计算机程序,所述计算机程序被执行时实现本申请实施例中视频处理方法的步骤,其具体实现方式与上述视频处理方法的实施例中记载的实施方式、所达到的技术效果一致,部分内容不再赘述。
图24示出了本实施例提供的用于实现上述视频处理方法的程序产品300,其可以采用便携式紧凑盘只读存储器(CD-ROM)并包括程序代码,并可以在终端设备,例如个人电脑上运行。然而,本申请的程序产品300不限于此,在本申请中,可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。程序产品300可以采用一个或多个可读介质的任意组合。可读介质可以是可读信号介质或者可读存储介质。可读存储介质例如可以为但不限于电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。
计算机可读存储介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了可读程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。可读存储介质还可以是任何可读介质,该可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。可读存储介质上包含的程序代码可以用任何适当的介质传输,包括但不限于无线、有线、光缆、RF等,或者上述的任意合适的组合。可以以一种或多种程序设计语言的任意组合来编写用于执行本申请操作的程序代码,程序设计语言包括面向对象的程序设计语言诸如Java、C++等,还包括常规的过程式程序设计语言诸如C语言或类似的程序设计语言。程序代码可以完全地在用户计算设备上执行、部分地在用户设备上执行、作为一个独立的软件包执行、部分在用户计算设备上部分在远程计算设备上执行、或者完全在远程计算设备或服务器上执行。在涉及远程计算设备的情形中,远程计算设备可以通过任意种类的网络,包括局域网(LAN)或广域网(WAN),连接到用户计算设备,或者,可以连接到外部计算设备(例如利用因特网服务提供商来通过因特网连接)。
本申请从使用目的上,效能上,进步及新颖性等观点进行阐述,本申请以上的说明书及说明书附图,仅为本申请的较佳实施例而已,并非以此局限本申请,因此,凡一切与本申请构造,装置,特征等近似、雷同的,即凡依本申请专利申请范围所作的等同替换或修饰等,皆应属本申请的专利申请保护的范围之内。

Claims (20)

  1. 一种视频处理方法,所述方法包括:
    获取实时视频数据,所述实时视频数据是摄像头拍摄目标对象得到的;
    获取所述目标对象的疾病类型;
    基于所述目标对象的疾病类型,获取所述实时视频数据的推流策略,所述推流策略用于指示第一预设清晰度;
    基于所述推流策略,确定所述实时视频数据对应的待推流数据;
    将所述待推流数据推流至医生设备。
  2. 根据权利要求1所述的方法,其中,所述方法还包括:
    基于所述目标对象的疾病类型,确定所述目标对象的目标部位;
    所述基于所述推流策略,确定所述实时视频数据对应的待推流数据,包括:
    基于所述第一预设清晰度,对所述目标部位对应的所述实时视频数据的第一部分进行清晰度调整,得到所述待推流数据。
  3. 根据权利要求2所述的方法,其中,所述方法还包括:
    获取第二预设清晰度,所述第二预设清晰度小于所述第一预设清晰度;
    所述基于所述第一预设清晰度,对所述目标部位对应的所述实时视频数据的第一部分进行清晰度调整,得到所述待推流数据,包括:
    基于所述第一预设清晰度,对所述目标部位对应的所述实时视频数据的第一部分进行清晰度调整,得到所述待推流数据的第一部分;
    基于所述第二预设清晰度,对所述实时视频数据的第二部分进行清晰度调整,得到所述待推流数据的第二部分,所述实时视频数据的第二部分是所述实时视频数据中第一部分以外的部分或者全部。
  4. 根据权利要求3所述的方法,其中,所述将所述待推流数据推流至医生设备,包括:
    基于所述待推流数据的第一部分和第二部分,合成得到所述待推流数据并推流至所述医生设备;或者,
    将所述待推流数据的第一部分和第二部分分别推流至所述医生设备。
  5. 根据权利要求2所述的方法,其中,所述基于所述目标对象的疾病类型,确定所述目标对象的目标部位,包括:
    获取多个样本对象的训练数据,每个样本对象的训练数据包括所述样本对象的疾病类型和目标部位;
    利用所述多个样本对象的训练数据训练深度学习模型,得到目标部位分类模型;
    将所述目标对象的疾病类型输入所述目标部位分类模型,得到所述目标对象的目标部位。
  6. 根据权利要求1所述的方法,其中,所述方法还包括:
    获取所述目标对象对应的网络带宽数据;
    所述基于所述目标对象的疾病类型,获取所述实时视频数据的推流策略,包括:
    基于所述目标对象的疾病类型和所述目标对象对应的网络带宽数据,获取所述实时视频数据的推流策略。
  7. 根据权利要求1所述的方法,其中,所述方法还包括:
    基于所述目标对象的疾病类型,获取所述待推流数据的显示策略,所述显示策略用于指示预设尺寸、预设位置、预设亮度、预设对比度和预设饱和度中的一种或多种;
    基于所述显示策略,确定所述待推流数据对应的待显示数据;
    利用显示设备显示所述待显示数据。
  8. 根据权利要求7所述的方法,其中,所述显示策略用于指示所述预设尺寸;
    所述方法还包括:
    基于所述目标对象的疾病类型,确定所述目标对象的目标部位;
    所述基于所述显示策略,确定所述待推流数据对应的待显示数据,包括:
    基于所述预设尺寸,对所述目标部位对应的所述待推流数据的第一部分进行缩放,得到所述待显示数据,以使所述目标部位显示在所述显示设备中的尺寸不小于所述预设尺寸,且使所述目标部位对应的所述待推流数据的第一部分完整显示在所述显示设备中。
  9. 根据权利要求8所述的方法,其中,所述显示策略还用于指示所述预设位置;
    所述基于所述预设尺寸,对所述目标部位对应的所述待推流数据的第一部分进行缩放,得到所述待显示数据,包括:
    基于所述预设尺寸,对所述目标部位对应的所述待推流数据的第一部分进行缩放,得到待平移数据;
    基于所述预设位置,对所述待平移数据进行平移,得到所述待显示数据,以使所述目标部位显示在所述显示设备中的预设位置。
  10. 一种视频处理装置,所述装置包括:
    视频获取模块,用于获取实时视频数据,所述实时视频数据是摄像头拍摄目标对象得到的;
    疾病类型模块,用于获取所述目标对象的疾病类型;
    推流策略模块,用于基于所述目标对象的疾病类型,获取所述实时视频数据的推流策略,所述推流策略用于指示第一预设清晰度;
    待推流数据模块,用于基于所述推流策略,确定所述实时视频数据对应的待推流数据;
    数据推流模块,用于将所述待推流数据推流至医生设备。
  11. 根据权利要求10所述的装置,其中,所述装置还包括:
    目标部位模块,用于基于所述目标对象的疾病类型,确定所述目标对象的目标部位;
    所述待推流数据模块用于:
    基于所述第一预设清晰度,对所述目标部位对应的所述实时视频数据的第一部分进行清晰度调整,得到所述待推流数据。
  12. 根据权利要求11所述的装置,其中,所述装置还包括:
    清晰度获取模块,用于获取第二预设清晰度,所述第二预设清晰度小于所述第一预设清晰度;
    所述待推流数据模块包括:
    第一推流调整单元,用于基于所述第一预设清晰度,对所述目标部位对应的所述实时视频数据的第一部分进行清晰度调整,得到所述待推流数据的第一部分;
    第二推流调整单元,用于基于所述第二预设清晰度,对所述实时视频数据的第二部分进行清晰度调整,得到所述待推流数据的第二部分,所述实时视频数据的第二部分是所述实时视频数据中第一部分以外的部分或者全部。
  13. 根据权利要求12所述的装置,其中,所述数据推流模块包括:
    合成推流单元,用于基于所述待推流数据的第一部分和第二部分,合成得到所述待推流数据并推流至所述医生设备;或者,
    分别推流单元,用于将所述待推流数据的第一部分和第二部分分别推流至所述医生设备。
  14. 根据权利要求11所述的装置,其中,所述目标部位模块包括:
    训练数据单元,用于获取多个样本对象的训练数据,每个样本对象的训练数据包括所述样本对象的疾病类型和目标部位;
    模型训练单元,用于利用所述多个样本对象的训练数据训练深度学习模型,得到目标部位分类模型;
    类型输入单元,用于将所述目标对象的疾病类型输入所述目标部位分类模型,得到所述目标对象的目标部位。
  15. 根据权利要求10所述的装置,其中,所述装置还包括:
    网络带宽模块,用于获取所述目标对象对应的网络带宽数据;
    所述推流策略模块用于:
    基于所述目标对象的疾病类型和所述目标对象对应的网络带宽数据,获取所述实时视频数据的推流策略。
  16. 根据权利要求10所述的装置,其中,所述装置还包括:
    显示策略模块,用于基于所述目标对象的疾病类型,获取所述待推流数据的显示策略,所述显示策略用于指示预设尺寸、预设位置、预设亮度、预设对比度和预设饱和度中的一种或多种;
    待显示数据模块,用于基于所述显示策略,确定所述待推流数据对应的待显示数据;
    数据显示模块,用于利用显示设备显示所述待显示数据。
  17. 根据权利要求16所述的装置,其中,所述显示策略用于指示所述预设尺寸;
    所述装置还包括:目标部位模块,用于基于所述目标对象的疾病类型,确定所述目标对象的目标部位;
    所述待显示数据模块用于:
    基于所述预设尺寸,对所述目标部位对应的所述待推流数据的第一部分进行缩放,得到所述待显示数据,以使所述目标部位显示在所述显示设备中的尺寸不小于所述预设尺寸,且使所述目标部位对应的所述待推流数据的第一部分完整显示在所述显示设备中。
  18. 根据权利要求17所述的装置,其中,所述显示策略还用于指示所述预设位置;
    所述待显示数据模块包括:
    数据缩放单元,用于基于所述预设尺寸,对所述目标部位对应的所述待推流数据的第一部分进行缩放,得到待平移数据;
    数据平移单元,用于基于所述预设位置,对所述待平移数据进行平移,得到所述待显示数据,以使所述目标部位显示在所述显示设备中的预设位置。
  19. 一种电子设备,所述电子设备包括存储器和处理器,所述存储器存储有计算机程序,所述处理器执行所述计算机程序时实现权利要求1-9任一项所述方法的步骤。
  20. 一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现权利要求1-9任一项所述方法的步骤。
PCT/CN2022/092815 2021-07-20 2022-05-13 视频处理方法、装置、电子设备及计算机可读存储介质 WO2023000787A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110820705.8A CN113556571A (zh) 2021-07-20 2021-07-20 视频处理方法、装置、电子设备及计算机可读存储介质
CN202110820705.8 2021-07-20

Publications (1)

Publication Number Publication Date
WO2023000787A1 true WO2023000787A1 (zh) 2023-01-26

Family

ID=78103511

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/092815 WO2023000787A1 (zh) 2021-07-20 2022-05-13 视频处理方法、装置、电子设备及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN113556571A (zh)
WO (1) WO2023000787A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118576863A (zh) * 2024-07-31 2024-09-03 中国人民解放军总医院 Vr心理减压弹性恢复训练系统、设备及存储介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113556571A (zh) * 2021-07-20 2021-10-26 苏州景昱医疗器械有限公司 视频处理方法、装置、电子设备及计算机可读存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001094963A (ja) * 1999-09-20 2001-04-06 Nippon Telegr & Teleph Corp <Ntt> 映像伝送方法と仲介サーバ装置とプログラム記録媒体
CN101669815A (zh) * 2009-09-22 2010-03-17 广东威创视讯科技股份有限公司 一种医学切片远程诊断的系统及其网络传输方法
US20130267873A1 (en) * 2012-04-10 2013-10-10 Mindray Ds Usa, Inc. Systems and methods for monitoring patients with real-time video
US20150305662A1 (en) * 2014-04-29 2015-10-29 Future Life, LLC Remote assessment of emotional status
CN105835069A (zh) * 2016-06-06 2016-08-10 李志华 智能家用保健机器人
US20170262582A1 (en) * 2016-03-10 2017-09-14 Ricoh Co., Ltd. Secure Real-Time Healthcare Information Streaming
CN111698553A (zh) * 2020-05-29 2020-09-22 维沃移动通信有限公司 视频处理方法、装置、电子设备及可读存储介质
CN113556571A (zh) * 2021-07-20 2021-10-26 苏州景昱医疗器械有限公司 视频处理方法、装置、电子设备及计算机可读存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130307972A1 (en) * 2012-05-20 2013-11-21 Transportation Security Enterprises, Inc. (Tse) System and method for providing a sensor and video protocol for a real time security data acquisition and integration system
CN106131615A (zh) * 2016-07-25 2016-11-16 北京小米移动软件有限公司 视频播放方法及装置
CN108521609B (zh) * 2018-02-27 2019-05-17 北京达佳互联信息技术有限公司 确定推送视频类型的方法、装置及终端
CN111225209B (zh) * 2018-11-23 2022-04-12 北京字节跳动网络技术有限公司 视频数据推流方法、装置、终端及存储介质
CN112019930A (zh) * 2020-07-26 2020-12-01 杭州皮克皮克科技有限公司 一种直播视频的互动显示方法及装置
CN111986793B (zh) * 2020-09-03 2023-09-19 深圳平安智慧医健科技有限公司 基于人工智能的导诊处理方法、装置、计算机设备及介质
CN112954464A (zh) * 2021-01-21 2021-06-11 百果园技术(新加坡)有限公司 一种基于网络异常预测的视频清晰度选择方法及装置

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001094963A (ja) * 1999-09-20 2001-04-06 Nippon Telegr & Teleph Corp <Ntt> 映像伝送方法と仲介サーバ装置とプログラム記録媒体
CN101669815A (zh) * 2009-09-22 2010-03-17 广东威创视讯科技股份有限公司 一种医学切片远程诊断的系统及其网络传输方法
US20130267873A1 (en) * 2012-04-10 2013-10-10 Mindray Ds Usa, Inc. Systems and methods for monitoring patients with real-time video
US20150305662A1 (en) * 2014-04-29 2015-10-29 Future Life, LLC Remote assessment of emotional status
US20170262582A1 (en) * 2016-03-10 2017-09-14 Ricoh Co., Ltd. Secure Real-Time Healthcare Information Streaming
CN105835069A (zh) * 2016-06-06 2016-08-10 李志华 智能家用保健机器人
CN111698553A (zh) * 2020-05-29 2020-09-22 维沃移动通信有限公司 视频处理方法、装置、电子设备及可读存储介质
CN113556571A (zh) * 2021-07-20 2021-10-26 苏州景昱医疗器械有限公司 视频处理方法、装置、电子设备及计算机可读存储介质

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118576863A (zh) * 2024-07-31 2024-09-03 中国人民解放军总医院 Vr心理减压弹性恢复训练系统、设备及存储介质

Also Published As

Publication number Publication date
CN113556571A (zh) 2021-10-26

Similar Documents

Publication Publication Date Title
WO2023000787A1 (zh) 视频处理方法、装置、电子设备及计算机可读存储介质
US6699187B2 (en) System and method for providing remote expert communications and video capabilities for use during a medical procedure
WO2022262495A1 (zh) 家用医疗设备的控制方法及相关装置
CA3071714C (en) Visualization system for deep brain stimulation
US9901740B2 (en) Clinician programming system and method
US20050033386A1 (en) System and method for remote programming of a medical device
WO2023005353A1 (zh) 基于多模态数据的配置信息获取装置及相关设备
CN102573989A (zh) 在植入式医疗设备中存储治疗区域的图像
DE10116361A1 (de) Funkmodul, Respirator, Überwachungsgerät dafür; Therapiegerät zur Durchführung der CPAP-Therapie, Überwachungsgerät dafür; Systeme sowie Verfahren
WO2023185410A1 (zh) 刺激电极导线的成像识别方法及相关装置
CN114842956B (zh) 控制设备、医疗系统及计算机可读存储介质
WO2024067449A1 (zh) 参数调节装置及其方法、程控设备、医疗系统、存储介质
CN113362946A (zh) 视频处理装置、电子设备及计算机可读存储介质
CN106777904B (zh) 远程可视化数据交互方法
WO2023000788A1 (zh) 参数比对方法、装置、电子设备及计算机可读存储介质
CN206880740U (zh) 高清手术观摩控制系统
WO2024041496A1 (zh) 充电提醒装置、植入式神经刺激系统及存储介质
CN115460986A (zh) 术后植入部位监测
JP2024521942A (ja) 目によるイメージとデジタル・イメージとの間の変換のための方法およびシステム
WO2023103740A1 (zh) 画面显示控制方法、设备、远程会诊系统及存储介质
WO2023226636A1 (zh) 控制器、植入式神经刺激系统及计算机可读存储介质
WO2023024881A1 (zh) 慢性病患者视频追溯方法及相关装置
WO2023011493A1 (zh) 双盲实验装置、电子设备、双盲实验系统及存储介质
CN107919166A (zh) 一种植入式医疗器械远程监控系统及方法
CN112843467A (zh) 视觉假体装置、系统及其控制方法、存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22844954

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22844954

Country of ref document: EP

Kind code of ref document: A1