CN112653866A - Terminal and video diagnosis method - Google Patents

Terminal and video diagnosis method

Info

Publication number
CN112653866A
CN112653866A (application CN202110040230.0A)
Authority
CN
China
Prior art keywords
video
terminal
acquired
target
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110040230.0A
Other languages
Chinese (zh)
Inventor
许丽星
王昕
方鹏程
Current Assignee
Qingdao Hisense Electronic Industry Holdings Co Ltd
Original Assignee
Qingdao Hisense Electronic Industry Holdings Co Ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Hisense Electronic Industry Holdings Co Ltd
Priority to CN202110040230.0A
Publication of CN112653866A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/14: Systems for two-way working
    • H04N 7/141: Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/142: Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 40/00: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H 40/60: ICT specially adapted for the management or operation of medical equipment or devices
    • G16H 40/67: ICT specially adapted for the management or operation of medical equipment or devices for remote operation
    • G16H 80/00: ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring

Abstract

The disclosure relates to the field of computer technology and provides a terminal and a video diagnosis method. The terminal of this embodiment includes: a display component, configured to display video sent by a peer device engaged in a video consultation with the terminal; and a processor, configured to, while a camera connected through the terminal's camera interface is used for a video consultation with the peer device, respond to a device-selection instruction triggered by the user by acquiring a first video, captured by the target health detection device corresponding to the instruction, that contains a part of the user's body; splice the first video with a second video captured by the camera to obtain a target video; and send the target video to the peer device for display. By viewing the target video displayed on the peer device, a doctor can closely observe the specific part of the user's body and give an accurate diagnosis, reducing the cases in which a condition cannot be diagnosed during a video consultation.

Description

Terminal and video diagnosis method
Technical Field
The disclosure relates to the field of computer technology, and in particular to a terminal and a video diagnosis method.
Background
With advances in science and technology, the medical industry has developed rapidly and people pay ever more attention to their health. Traveling to a hospital, however, costs patients a great deal of time and energy. To reduce the inconvenience of in-person visits, video consultations have become increasingly popular.
In the related art, a patient holds a video call with a doctor through a terminal, and the doctor consults remotely based on the conversation with the patient and on observation of the images captured by the terminal's camera.
However, when the patient describes symptoms unclearly or the condition is complicated, the doctor cannot reach a diagnosis through video alone.
Disclosure of Invention
The present disclosure provides a terminal and a video diagnosis method, which reduce the occurrence of conditions that cannot be diagnosed during a video consultation.
In a first aspect, an embodiment of the present disclosure provides a terminal, including: a camera interface, a display component, and a processor;
the display component is configured to display video sent by a peer device engaged in a video consultation with the terminal;
the processor is configured to: while a camera connected through the camera interface is used for a video consultation with the peer device, respond to a device-selection instruction triggered by the user by acquiring a first video, captured by the target health detection device corresponding to the instruction, that contains a part of the user's body; splice the first video with a second video captured by the camera to obtain a target video; and send the target video to the peer device for display.
With this scheme, during a video consultation the terminal responds to a user-triggered device-selection instruction by acquiring, from the target health detection device, a first video that contains the user's body part and reflects the condition of the specific part corresponding to the instruction. The target video obtained by splicing the first video with the second video captured by the camera therefore shows both the specific part and the scene captured by the camera. After the terminal sends the target video to the peer device, the doctor can closely observe the specific part of the user's body by viewing it there and give an accurate diagnosis, reducing the cases in which a condition cannot be diagnosed during a video consultation.
In some optional embodiments, the processor is specifically configured to:
scale the first video by the scaling ratio corresponding to the first video, and scale the second video by the scaling ratio corresponding to the second video; and
splice the scaled first video and the scaled second video.
With this scheme, each video is scaled by its own ratio before splicing. When the doctor mainly needs to examine the affected part, the first video can be enlarged so that the part can be observed in finer detail; when the doctor mainly needs to view the overall external condition, the second video can be enlarged so that overall external changes, for example while the user performs a specific action, can be determined more accurately.
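The scale-then-splice step described above can be sketched as follows (an illustrative sketch only, not the patent's implementation; the nearest-neighbour resize, the side-by-side layout, and all names are assumptions):

```python
import numpy as np

def scale_frame(frame: np.ndarray, ratio: float) -> np.ndarray:
    """Nearest-neighbour resize of an H x W x C frame by `ratio`."""
    h, w = frame.shape[:2]
    new_h, new_w = max(1, int(h * ratio)), max(1, int(w * ratio))
    rows = np.arange(new_h) * h // new_h   # source row for each output row
    cols = np.arange(new_w) * w // new_w   # source column for each output column
    return frame[rows][:, cols]

def stitch_frames(first: np.ndarray, second: np.ndarray,
                  first_ratio: float, second_ratio: float) -> np.ndarray:
    """Scale each frame by its own ratio, then place them side by side,
    padding the shorter one with black rows so the heights match."""
    a, b = scale_frame(first, first_ratio), scale_frame(second, second_ratio)
    h = max(a.shape[0], b.shape[0])

    def pad(x: np.ndarray) -> np.ndarray:
        out = np.zeros((h, x.shape[1], x.shape[2]), dtype=x.dtype)
        out[:x.shape[0]] = x
        return out

    return np.concatenate([pad(a), pad(b)], axis=1)
```

Making `first_ratio` larger than `second_ratio` is what lets the affected part dominate the target frame; reversing the emphasis enlarges the camera view instead.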
In some optional embodiments, the processor is further configured to determine the scaling ratios by one of the following:
using a first preset ratio as the scaling ratio for the first video and a second preset ratio as the scaling ratio for the second video; or
in response to a ratio-setting instruction, using the scaling ratios for the first video and the second video carried in that instruction; or
determining, according to a preset mapping between information and ratios, a first ratio and a second ratio corresponding to target information contained in audio data sent by the peer device, and using the first ratio as the scaling ratio for the first video and the second ratio as the scaling ratio for the second video.
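The three ways of fixing the two ratios can be sketched as a single resolver (a hedged sketch; the keyword table, the priority order, and all names are illustrative assumptions, not from the patent):

```python
# Hypothetical mapping from target information recognised in the doctor's
# audio to (first_video_ratio, second_video_ratio) pairs.
KEYWORD_RATIOS = {
    "affected part": (1.5, 0.5),  # emphasise the health-detection-device video
    "whole body": (0.5, 1.5),     # emphasise the camera video
}
DEFAULT_RATIOS = (1.0, 1.0)       # the preset first and second ratios

def resolve_ratios(instruction=None, transcript=None):
    """Return (first_ratio, second_ratio) from, in order of priority:
    an explicit ratio-setting instruction, target information found in
    the peer device's audio transcript, or the preset defaults."""
    if instruction is not None:
        return instruction        # ratios carried in the instruction
    if transcript is not None:
        for keyword, ratios in KEYWORD_RATIOS.items():
            if keyword in transcript:
                return ratios
    return DEFAULT_RATIOS
```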
In some optional embodiments, before responding to the user-triggered device-selection instruction by acquiring the first video captured by the corresponding target health detection device, the processor is further configured to:
display, through the display component, identification information of the health detection devices connected to the terminal;
and the processor is specifically configured to:
in response to the device-selection instruction, select the target health detection device from the health detection devices connected to the terminal; and
acquire the first video captured by the target health detection device.
In some optional embodiments, the processor is specifically configured to either:
splice each image in the first video with the image in the second video acquired at the same moment; or
match images in the first video with images in the second video acquired within a preset duration, and splice each image in the first video with its matched image in the second video.
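Frame matching within a preset duration can be sketched as a timestamp-pairing step (a sketch; the tolerance value and the stream representation are assumptions):

```python
def match_frames(first_stream, second_stream, tolerance=0.04):
    """Pair frames whose capture timestamps differ by at most `tolerance`
    seconds. Each stream is a list of (timestamp, frame) tuples sorted by
    timestamp; returns (first_frame, second_frame) pairs ready to splice."""
    pairs = []
    j = 0  # index of the current candidate in the second stream
    for t1, f1 in first_stream:
        # advance while the next second-stream frame is at least as close
        while (j + 1 < len(second_stream)
               and abs(second_stream[j + 1][0] - t1) <= abs(second_stream[j][0] - t1)):
            j += 1
        if second_stream and abs(second_stream[j][0] - t1) <= tolerance:
            pairs.append((f1, second_stream[j][1]))
    return pairs
```

Splicing images acquired at exactly the same moment is the special case `tolerance=0`.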
In a second aspect, an embodiment of the present disclosure provides a video diagnosis method applied to a terminal, the method including:
while a camera connected through a camera interface of the terminal is used for a video consultation with a peer device, responding to a device-selection instruction triggered by the user by acquiring a first video, captured by the target health detection device corresponding to the instruction, that contains a part of the user's body;
splicing the first video with a second video captured by the camera to obtain a target video; and
sending the target video to the peer device for display.
In some optional embodiments, splicing the first video with the second video captured by the camera includes:
scaling the first video by the scaling ratio corresponding to the first video, and scaling the second video by the scaling ratio corresponding to the second video; and
splicing the scaled first video and the scaled second video.
In some optional embodiments, the scaling ratios for the first video and the second video are determined by one of the following:
using a first preset ratio as the scaling ratio for the first video and a second preset ratio as the scaling ratio for the second video; or
in response to a ratio-setting instruction, using the scaling ratios for the first video and the second video carried in that instruction; or
determining, according to a preset mapping between information and ratios, a first ratio and a second ratio corresponding to target information contained in audio data sent by the peer device, using the first ratio as the scaling ratio for the first video and the second ratio as the scaling ratio for the second video.
In some optional embodiments, before responding to the user-triggered device-selection instruction by acquiring the first video captured by the corresponding target health detection device, the method further includes:
displaying, through a display component of the terminal, identification information of the health detection devices connected to the terminal;
and responding to the device-selection instruction by acquiring the first video includes:
in response to the device-selection instruction, selecting the target health detection device from the health detection devices connected to the terminal; and
acquiring the first video captured by the target health detection device.
In some optional embodiments, splicing the first video with the second video captured by the camera includes either:
splicing each image in the first video with the image in the second video acquired at the same moment; or
matching images in the first video with images in the second video acquired within a preset duration, and splicing each image in the first video with its matched image in the second video.
In a third aspect, an embodiment of the present disclosure provides a video diagnosis apparatus, including:
a video acquisition module, configured to, while a camera connected through a camera interface of a terminal is used for a video consultation with a peer device, respond to a device-selection instruction triggered by the user by acquiring a first video, captured by the target health detection device corresponding to the instruction, that contains a part of the user's body;
a video processing module, configured to splice the first video with a second video captured by the camera to obtain a target video; and
a video sending module, configured to send the target video to the peer device for display.
In a fourth aspect, the present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the video diagnosis method of any implementation of the second aspect.
For the technical effects of any implementation of the second through fourth aspects, refer to the technical effects of the corresponding implementations of the first aspect; details are not repeated here.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described here show only some embodiments of the present disclosure; those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic view of an application scenario provided by an embodiment of the present disclosure;
Fig. 2A is a block diagram of a hardware configuration of a terminal according to an embodiment of the present disclosure;
Fig. 2B is a block diagram of a software structure of a terminal according to an embodiment of the present disclosure;
Fig. 3 is a schematic flow chart of a first video diagnosis method provided by an embodiment of the present disclosure;
Fig. 4A is a schematic view of a first user interface of a terminal according to an embodiment of the present disclosure;
Fig. 4B is a schematic view of a second user interface of a terminal according to an embodiment of the present disclosure;
Fig. 5 is a schematic diagram of a first splicing scheme provided by an embodiment of the present disclosure;
Fig. 6A is a schematic view of a user interface of a peer device provided by an embodiment of the present disclosure;
Fig. 6B is a schematic view of a third user interface of a terminal according to an embodiment of the present disclosure;
Fig. 7 is a schematic flow chart of a second video diagnosis method provided by an embodiment of the present disclosure;
Fig. 8 is a schematic diagram of a second splicing scheme provided by an embodiment of the present disclosure;
Fig. 9 is a schematic structural diagram of a video diagnosis apparatus according to an embodiment of the present disclosure;
Fig. 10 is a schematic block diagram of a terminal according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the present disclosure clearer, the disclosure is described in further detail below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments derived by a person of ordinary skill in the art from the embodiments disclosed herein without creative effort fall within the scope of protection of the present disclosure.
In the embodiments of the present disclosure, the term "and/or" describes an association between objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the preceding and following objects.
The terms "first" and "second" are used for description only and are not to be understood as indicating or implying relative importance or the number of technical features indicated. A feature qualified by "first" or "second" may thus explicitly or implicitly include one or more such features. In the description of the present disclosure, "a plurality" means two or more unless otherwise specified.
In the embodiments of the present disclosure, the term "image" may refer to a single frame or to a sequence of consecutive frames, that is, a video.
In the description of the present disclosure, unless otherwise explicitly stated or limited, the term "connected" is to be interpreted broadly; for example, it may mean directly connected, indirectly connected through an intermediate medium, or in communication between two devices. The specific meaning of these terms in the present disclosure can be understood by those of ordinary skill in the art as appropriate.
Traveling to a hospital costs patients a great deal of time and energy. To reduce the inconvenience of in-person visits, video consultations have become increasingly popular.
The terminal can send the overall external image captured by its camera, together with the patient's audio, to the peer device, which displays the received image and plays the audio. The patient can thus hold a video call with a doctor through the terminal, and the doctor can consult remotely based on the conversation and on observation of the images.
When the patient describes symptoms unclearly or the condition is complicated, the doctor needs to examine the affected part. However, the terminal can hardly capture a clear image of the affected part, so the doctor cannot observe it in detail and finds it difficult to give a diagnosis, and the patient still has to visit a hospital. Video consultation is therefore of limited use. For example, if the patient's mouth is uncomfortable, the doctor cannot see the surface condition inside the oral cavity and so cannot give a diagnosis or advice; even a patient with a common oral ulcer has to take the time to visit a hospital.
To reduce the occurrence of conditions that cannot be diagnosed during a video consultation, an embodiment of the present disclosure provides a terminal and a video diagnosis method. The present disclosure is described in further detail below with reference to the accompanying drawings and specific embodiments.
Referring to Fig. 1, a schematic view of an application scenario provided by an embodiment of the present disclosure is shown. The scenario includes health detection devices (Fig. 1 shows a health detection device 101 and a health detection device 102 as examples; in practice there may be more or fewer), a terminal 200, and a peer device 300.
While a camera connected through a camera interface of the terminal 200 is used for a video consultation with the peer device 300, the terminal responds to a device-selection instruction triggered by the user by acquiring a first video, captured by the target health detection device corresponding to the instruction, that contains a part of the user's body;
splices the first video with a second video captured by the camera to obtain a target video; and
sends the target video to the peer device 300 for display.
The target health detection device is one or more of the health detection devices connected to the terminal (health detection device 101 and/or health detection device 102 in Fig. 1). It is placed at a part of the user's body and can capture a first video containing that part (such as the patient's affected part).
The terminal 200 can communicate with the peer device 300 in multiple ways, for example over a Local Area Network (LAN), a Wireless Local Area Network (WLAN), or other networks. For example, the terminal 200 sends the target video to the peer device 300 over a WLAN.
The terminal 200 can exchange data with the target health detection device in multiple ways, for example over WiFi, a data cable, or Bluetooth. For example, the target health detection device sends the captured first video to the terminal 200 over Bluetooth.
The scenario described above is merely one example for implementing the embodiments of the present disclosure; the embodiments are not limited to it.
Fig. 2A shows a block diagram of a hardware configuration of the terminal 200. In some embodiments, the terminal 200 includes at least one of a tuner-demodulator 210, a communicator 220, a detector 230, an external device interface 240, a processor 250, a display component 260, an audio output interface 270, a memory, a power supply, and a user interface 280.
In some embodiments, the display component 260 includes a display screen for presenting pictures and a driving component for driving image display; it receives image signals output by the processor and displays video content, image content, menu manipulation interfaces, and user manipulation UI interfaces.
In some embodiments, the display component 260 may be at least one of a liquid crystal display, an OLED display, and a projection display, and may also be a projection device and a projection screen.
In some embodiments, the tuner-demodulator 210 receives broadcast television signals through a wired or wireless connection and demodulates audio/video signals from among a plurality of wireless or wired broadcast television signals.
In some embodiments, communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi module, a bluetooth module, a wired ethernet module, and other network communication protocol chips or near field communication protocol chips, and an infrared receiver. The terminal 200 may perform data transmission with the target health detection device or the peer device 300 through the communicator 220.
In some embodiments, the detector 230 is used to collect signals of the external environment or interaction with the outside. For example, detector 230 includes a light receiver, a sensor for collecting ambient light intensity; alternatively, the detector 230 includes an image collector, such as a camera, which may be used to collect external environment scenes, attributes of the user, or user interaction gestures, or the detector 230 includes a sound collector, such as a microphone, which is used to receive external sounds.
In some embodiments, the external device interface 240 may include, but is not limited to, the following: any one or more of a High Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, camera interface, and the like. The interface may be a composite input/output interface formed by the plurality of interfaces.
In some embodiments, the processor 250 and the tuner-demodulator 210 may be located in separate devices; that is, the tuner-demodulator 210 may be in a device external to the main device containing the processor 250, such as an external set-top box.
In some embodiments, the processor 250 includes at least one of a Central Processing Unit (CPU), a video processor, an audio processor, a Graphics Processing Unit (GPU), a RAM (Random Access Memory), a ROM (Read-Only Memory), first through n-th input/output interfaces, and a communication bus.
In some embodiments, the CPU executes operating system and application program instructions stored in the memory, and runs various applications, data, and content according to interactive instructions received from external input, so as to display and play various audio-visual content. The CPU may include a plurality of processors, for example a main processor and one or more sub-processors.
In some embodiments, the graphics processor generates various graphics objects, such as at least one of icons, operation menus, and graphics displayed in response to user input. It includes an arithmetic unit, which processes the interactive instructions input by the user and displays objects according to their display attributes, and a renderer, which renders the objects produced by the arithmetic unit for display on the display component.
In some embodiments, the video processor is configured to receive an external video signal, and perform at least one of video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis according to a standard codec protocol of the input signal, so as to obtain a signal that can be directly displayed or played on the terminal 200.
In some embodiments, the video processor includes at least one of a demultiplexing module, a video decoding module, an image composition module, a frame rate conversion module, a display formatting module, and the like. The demultiplexing module is used for demultiplexing the input audio and video data stream. And the video decoding module is used for processing the video signal after demultiplexing, including decoding, scaling and the like. And the image synthesis module is used for carrying out superposition mixing processing on the GUI signal input by the user or generated by the user and the video image after the zooming processing by the graphic generator so as to generate an image signal for display. And the frame rate conversion module is used for converting the frame rate of the input video. And the display formatting module is used for converting the received video output signal after the frame rate conversion, and changing the signal to be in accordance with the signal of the display format, such as an output RGB data signal.
In some embodiments, the audio processor is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform at least one of noise reduction, digital-to-analog conversion, and amplification processing to obtain a sound signal that can be played in the speaker.
In some embodiments, the user may input a user command on a Graphical User Interface (GUI) displayed on the display component 260, and the user input interface receives the command through the GUI. Alternatively, the user may input a command by making a specific sound or gesture, and the user input interface recognizes the sound or gesture through a sensor to receive the command.
In some embodiments, a "user interface" is a media interface for interaction and information exchange between an application or operating system and a user that enables conversion between an internal form of information and a form that is acceptable to the user. A commonly used presentation form of the User Interface is a Graphical User Interface (GUI), which refers to a User Interface related to computer operations and displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in the display screen of the terminal, where the control may include at least one of a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
In some embodiments, the user interface 280 is an interface for receiving control input (for example, physical buttons on the body of the terminal, or the like).
In some embodiments, the system of the terminal may include a kernel, a command parser (shell), a file system, and applications. The kernel, shell, and file system together form the basic operating system structure that lets users manage files, run programs, and use the system. After power-on, the kernel starts, activates kernel space, abstracts hardware, initializes hardware parameters, and operates and maintains virtual memory, the scheduler, signals, and inter-process communication (IPC). After the kernel starts, the shell and user applications are loaded. An application is compiled into machine code at startup, forming a process.
A block diagram of the architectural configuration of the operating system of the terminal 200 is illustrated in fig. 2B. The operating system architecture comprises an application layer, a middleware layer and a kernel layer from top to bottom.
The application layer: applications built into the system and non-system-level applications both belong to the application layer, which is responsible for direct interaction with the user. The application layer may include a plurality of applications, such as a settings application, a media center application, and the like. These applications may be implemented as Web applications executing on a WebKit engine, and in particular may be developed and executed based on HTML5, Cascading Style Sheets (CSS), JavaScript, and the like.
The middleware layer may provide standardized interfaces to support various environments and systems. For example, the middleware layer may be implemented as Multimedia and Hypermedia information coding Experts Group (MHEG) middleware related to data broadcasting, DLNA middleware related to communication with external devices, middleware providing the browser environment in which each application in the terminal runs, and the like.
The kernel layer provides core system services, such as: file management, memory management, process management, network management, system security authority management and the like. The kernel layer may be implemented as a kernel based on various operating systems, for example, a kernel based on the Linux operating system.
The kernel layer also provides communication between system software and hardware, and provides device driver services for various hardware, such as: providing a display driver for a display component, a camera driver for a camera, a WiFi driver for a WiFi module, an audio driver for an audio output interface, a power management driver for a Power Management (PM) module, etc.
In the embodiment of the present disclosure, the terminal may be a television, a mobile phone, a tablet computer, a Personal Computer (PC), and the like, which is not limited in this embodiment. The hardware configuration and the software structure of different terminals may be different, and thus both fig. 2A and fig. 2B are exemplary illustrations.
The peer device may also be a television, a mobile phone, a tablet computer, a PC, and the like, and the terminal and the peer device may be the same or different, which is not limited in this embodiment.
The following describes the technical solutions of the present disclosure and how to solve the above technical problems in detail with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
Fig. 3 is a schematic flowchart of a first video visit method provided in an embodiment of the present disclosure, applied to the terminal; as shown in fig. 3, the method may include:
Step S301: in response to a device selection instruction triggered by a user during a video visit conducted with a peer device through a camera connected to the camera interface of the terminal, acquiring a first video, containing a user part, captured by the target health detection device corresponding to the device selection instruction.
In this embodiment, the terminal is connected to health detection devices that capture videos of different user parts. If the terminal simply acquired the video captured by every connected health detection device, the videos could be confused: for example, if the terminal acquired the videos captured by all connected health detection devices and sent them to the peer device for display, the doctor could not clearly observe the captured video of the part of interest, and the video diagnosis effect would be poor.
Based on this, in this embodiment, the first video containing the user part, captured by the target health detection device corresponding to the device selection instruction, is acquired in response to that instruction, so that the video of the part the doctor is concerned with is obtained.
The present embodiment does not specifically limit the manner of obtaining the device selection instruction, for example:
The first mode is as follows: an identification information display instruction is triggered in response to a preset key being pressed (a physical key arranged on the terminal housing or a virtual key on the terminal user interface); based on this instruction, the terminal displays the identification information of the connected health detection devices on the display screen, and the user triggers a selection instruction for the corresponding health detection device by touching its identification information.
Referring to fig. 4A, after the terminal is connected to the peer device, the terminal-side interface displays the image captured by the peer device, the image captured by the camera connected to the camera interface, and an "auxiliary device" key. The user enters the interface shown in fig. 4B by touching the "auxiliary device" key; this interface displays the identification information of all health detection devices currently connected to the terminal and of the camera connected to the camera interface. As described above, the terminal and the health detection devices may exchange data over multiple communication modes, and the identification information differs with the communication mode. As shown in fig. 4B, "camera interface" is the identifier corresponding to the camera, "USB1" is the identifier of the health detection device (referred to as health detection device 1) connected to the terminal via USB1, and "health detection device 2" is the identifier of the health detection device 2 interacting with the terminal via wireless communication. The user selects "camera interface" and "USB1", and triggers a selection instruction for health detection device 1 by touching the "confirm" key; touching the "cancel" key cancels the device selection instruction.
Fig. 4A and 4B are only possible implementations of the terminal user interface, and this embodiment is not limited thereto; for example, the "auxiliary device" key in fig. 4A may be replaced by a "health detection device" key or the like, and fig. 4B need not display the "confirm" and "cancel" keys.
The second mode is as follows: the user sends the terminal audio data carrying the identification information of the selected health detection device, triggering a device selection instruction; the terminal receives the user's voice through the microphone, converts it into computer-readable input text, and determines the identification information of the selected health detection device from that text.
The third mode is as follows: the doctor sends the device selection instruction to the terminal through the peer device, and the specific sending mode may refer to the interaction mode between the terminal and the peer device in the above embodiment, which is not described herein again.
As described above, the target health detection device is one or more of the health detection devices connected to the terminal. Acquiring, in response to a device selection instruction triggered by the user, the first video containing the user part captured by the corresponding target health detection device may be implemented by, but is not limited to, the following:
responding to the equipment selection instruction, and selecting the target health detection equipment from the health detection equipment connected with the terminal;
acquiring a first video acquired by the target health detection device.
Taking the first way of obtaining the device selection instruction as an example: the user triggers a selection instruction for the corresponding health detection device by touching its identification information; the instruction carries the touched identification information, and the target health detection device corresponding to that identification information is selected from the connected health detection devices.
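The device selection above can be sketched as a lookup over a registry of connected devices. This is a non-limiting illustration only: the registry structure, identifiers, and device types are assumptions drawn from the fig. 4B example, not part of the disclosure.

```python
# Hedged sketch of selecting the target health detection device in step
# S301. The registry structure, identifiers, and device types are
# illustrative assumptions drawn from the fig. 4B example.
def select_target_devices(connected_devices, selected_ids):
    """Return the connected devices whose identification information was
    touched by the user, in selection order; unknown ids are skipped."""
    return [connected_devices[dev_id]
            for dev_id in selected_ids
            if dev_id in connected_devices]

connected = {
    "camera interface": {"type": "camera"},
    "USB1": {"type": "oral magnifier"},                 # health detection device 1
    "health detection device 2": {"type": "otoscope"},  # wireless link
}
# The user touches "USB1" and then "confirm", as in the first mode above.
targets = select_target_devices(connected, ["USB1"])
```

Keying the registry by the displayed identification information keeps the lookup independent of the underlying communication mode (USB or wireless), mirroring how fig. 4B lists both kinds of identifiers side by side.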
Step S302: splicing the first video and the second video captured by the camera to obtain a target video.
In this embodiment, the first video is a captured image of a specific part of the user. Generally, when diagnosing, a doctor needs to check the user's overall external condition in addition to the condition of the affected part. For example, when a user reports that the throat is painful and has a foreign-body sensation, the doctor needs to check the user's swallowing action, or the effect of pressing a certain part of the neck, in addition to checking the inside of the oral cavity; when the user's limb is injured, the doctor needs to check the user's joint movement, or the effect of pressing around the affected part, in addition to checking the skin damage. Based on this, the terminal also needs to capture, through the camera, a second video of the user's overall exterior.
If, after the first video and the second video were obtained, they were sent directly to the peer device, the peer device would display them frame by frame in order of reception time; the two videos would then be displayed interleaved, and the doctor could not clearly view both the affected part and the user's overall external condition. The terminal therefore needs to splice the first video and the second video, so that the resulting target video reflects both the condition of the affected part and the user's overall external condition.
In some optional embodiments, the splicing process of the first video and the second video may be implemented by, but not limited to, the following ways:
1) Splicing an image in the first video and an image in the second video that have the same capture time.

In this embodiment, when the first video and the second video have the same, relatively low capture frequency, an image in the second video can be spliced with the image in the first video that has the same capture time. The video data includes a timestamp for each frame of image, and the capture time of an image is determined from that timestamp.

2) Splicing an image in the first video and an image in the second video that are acquired at the same time.

Determining the capture time of each image from its timestamp takes time and occupies the terminal's computing resources; instead, an image in the first video and an image in the second video that are acquired at the same time can be spliced directly.
3) Matching the image in the first video and the image in the second video acquired within a preset time length; and splicing the image in the first video and the matched image in the second video.
Splicing images with the same capture time is not suitable for scenes in which the first video and the second video have different capture frequencies. In some embodiments, the second-video images and the first-video images acquired within a certain duration are therefore matched, and the matched images are spliced. For example:
Images of 10 frames of the first video are acquired within 30 s and are denoted, in order of capture time, first image 1, first image 2, first image 3, ..., first image 10; within the same 30 s, images of 30 frames of the second video are acquired and are denoted, in order of capture time, second image 1, second image 2, second image 3, ..., second image 30.

The first image 1 corresponds to the second images 1, 2, and 3; the first image 2 corresponds to the second images 4, 5, and 6; the first image 3 corresponds to the second images 7, 8, and 9; and so on, until the first image 10 corresponds to the second images 28, 29, and 30.

The first image 1 is spliced with the second image 1, with the second image 2, and with the second image 3. The images of the remaining first-video frames are spliced with their matched second-video images in the same way, which is not repeated here.
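The matching of way 3) can be sketched as an index mapping when both capture frequencies are constant and the two captures start together. The helper below is illustrative only; it assumes an integer frame-count ratio and 1-based frame numbering, as in the 30 s example above.

```python
# Hedged sketch of matching way 3): pair each first-video frame with the
# second-video frames captured in the same sub-window of the preset
# duration. Assumptions: both captures start together, frequencies are
# constant, and the frame-count ratio is an integer.
def match_frames(n_first, n_second):
    """Map each first-video frame index to the list of second-video
    frame indices it is matched with."""
    ratio = n_second // n_first
    return {
        i: list(range((i - 1) * ratio + 1, i * ratio + 1))
        for i in range(1, n_first + 1)
    }

# 10 first-video frames and 30 second-video frames captured in 30 s.
pairs = match_frames(10, 30)
```

Because only the frame-count ratio is used, no per-frame timestamp arithmetic is needed, which is consistent with the goal of saving the terminal's computing resources.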
Referring to fig. 5, the target health detection device is an oral magnifier, the first video is a captured video of the inside of the oral cavity, and the first video is spliced to the right side of the second video.
Fig. 5 is only one possible splicing manner, and the first video may be spliced on the left side, above, or below the second video, and the specific splicing manner may be set according to an actual application scenario, which is not limited in this embodiment.
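The frame-level splicing itself can be sketched as follows. The representation here is an assumption: each frame is modeled as a row-major grid of pixel values, whereas real frames would be decoded RGB/YUV buffers; the layout follows fig. 5, with the first video on the right of the second.

```python
# Hedged sketch of the splicing operation: concatenate each row of the
# first-video frame onto the matching row of the second-video frame.
# The pixel-grid representation is an illustrative assumption.
def stitch_right(second_frame, first_frame):
    """Place the first-video frame to the right of the second-video
    frame; both frames must share one height for this layout."""
    if len(second_frame) != len(first_frame):
        raise ValueError("frames must have equal height for this layout")
    return [left + right for left, right in zip(second_frame, first_frame)]

second = [[0, 0], [0, 0]]  # 2x2 frame from the camera (second video)
first = [[1], [1]]         # 2x1 frame from the oral magnifier (first video)
target = stitch_right(second, first)  # 2x3 spliced target frame
```

Splicing the other layouts (left, above, below) differs only in whether rows or whole grids are concatenated, and in the order of the operands.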
Step S303: sending the target video to the peer device for display.
The specific implementation manner of the terminal sending the target video to the peer device may refer to the above embodiments, and details are not described here.
In this embodiment, a specific display manner of the peer device is not specifically limited, and referring to fig. 6A, the peer device may display the target video and the image acquired by the peer device through a display.
Fig. 6A is a possible implementation manner of a user interface of the peer device, and the peer device of this embodiment may also display other similar interfaces, such as only a target video, or may also display other keys.
In addition, referring to fig. 6B, the terminal may also display the target video and a video collected by the peer device.
Fig. 6B is a possible implementation manner of the user interface of the terminal, and the terminal of the present embodiment may also display other similar interfaces, for example, only the video collected by the peer device may be displayed, or other keys may also be displayed.
According to the above scheme, during a video visit, a first video containing the user part, captured by the target health detection device, is acquired in response to a device selection instruction triggered by the user; the first video reflects the condition of the specific part corresponding to that instruction. The target video is obtained by splicing the first video with the second video captured by the camera, so it reflects both the condition of the specific part and the scene captured by the camera. The terminal sends the target video to the peer device, and by viewing the target video displayed there the doctor can observe the specific user part in detail and give an accurate diagnosis, reducing the cases in which a condition cannot be diagnosed during a video visit.
Fig. 7 is a schematic flow chart of a second video visit method provided by an embodiment of the present disclosure, and as shown in fig. 7, the method may include:
Step S701: in response to a device selection instruction triggered by a user during a video visit conducted with a peer device through a camera connected to the camera interface of the terminal, acquiring a first video, containing a user part, captured by the target health detection device corresponding to the device selection instruction.
The step S701 is implemented in the same manner as the step S301, and is not described herein again.
Step S702: and zooming the first video based on the zoom ratio corresponding to the first video, and zooming the second video based on the zoom ratio corresponding to the second video.
In some scenes, the doctor mainly needs to view the condition of the affected part; if the first video and the second video were spliced directly, the resulting target video could not highlight the first video, and the doctor still could not observe the affected part in detail. In other scenes, the doctor mainly needs to view the overall external condition; if the first video and the second video were spliced directly, the resulting target video could not highlight the second video, and the doctor could not accurately view how the overall external condition changes when the patient performs a specific action.
Based on this, in this embodiment, before the first video and the second video are spliced, the first video may be scaled based on the scaling corresponding to the first video, and the second video may be scaled based on the scaling corresponding to the second video.
The present embodiment may determine the scaling corresponding to the first video and the scaling corresponding to the second video by, but not limited to, the following ways:
the first mode is as follows: taking a first preset proportion as a scaling proportion corresponding to the first video; and taking the second preset proportion as the corresponding scaling proportion of the second video.
In this embodiment, the above ratios are preset in advance. The first preset ratio is used directly as the scaling ratio corresponding to the first video, that is, the first video is enlarged or reduced according to the first preset ratio; the second preset ratio is used directly as the scaling ratio corresponding to the second video, that is, the second video is enlarged or reduced according to the second preset ratio.
The second mode is as follows: and responding to a scale setting instruction, and determining the scaling corresponding to the first video and the scaling corresponding to the second video carried in the scale setting instruction.
In this embodiment, the scale setting instruction carries a scaling corresponding to the first video and a scaling corresponding to the second video, and the scaling corresponding to the first video and the scaling corresponding to the second video carried in the scale setting instruction are determined in response to the scale setting instruction.
In this embodiment, the triggering manner of the ratio setting instruction is not specifically limited, and is as follows:
1) the terminal responds to the pressing of a preset key, displays a proportion setting interface through a display screen, and triggers a proportion setting instruction by touching the proportion displayed in the interface by a user or by self-defining the set proportion;
2) the user sends the audio data carrying the proportion to the terminal and triggers a proportion setting instruction;
3) the doctor sends the proportion setting instruction to the terminal through the opposite terminal device, and the specific sending mode may refer to the interaction mode between the terminal and the opposite terminal device in the above embodiment, which is not described herein again.
The third mode is as follows: determining a first proportion and a second proportion corresponding to target information contained in audio data sent by the opposite terminal equipment according to a corresponding relation between preset information and proportions; taking the first scale as a scaling corresponding to the first video; and taking the second scale as the corresponding scaling of the second video.
In this embodiment, a correspondence between certain key information and ratios may be preset, where the key information represents the condition requiring attention. For example, "press" may indicate that the doctor asks the patient to perform a pressing action; the doctor then pays more attention to the overall external condition while the user presses, the target video should highlight the second video, and the corresponding first ratio is smaller than the second ratio. "Inside" may indicate that the doctor needs to examine the affected part; the target video should then highlight the first video, and the corresponding first ratio is larger than the second ratio.
The above-mentioned several ways of determining the ratio are only exemplary, and the present embodiment may also use other ways to determine the ratio.
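The third mode's preset correspondence between key information and ratios can be sketched as a lookup table. The keyword strings and ratio values below are illustrative assumptions; the disclosure only requires that "press" yields a first ratio smaller than the second, and "inside" the opposite.

```python
# Hedged sketch of the third mode: a preset correspondence between key
# information in the doctor's audio and the two scaling ratios. The
# keywords and the concrete ratio values are illustrative assumptions.
RATIO_TABLE = {
    "press": (0.5, 1.0),   # highlight the second (overall external) video
    "inside": (1.0, 0.5),  # highlight the first (affected-part) video
}

def ratios_for_audio(recognized_text, default=(1.0, 1.0)):
    """Return (first-video ratio, second-video ratio) for the first
    preset keyword found in the recognized audio text."""
    for keyword, ratios in RATIO_TABLE.items():
        if keyword in recognized_text:
            return ratios
    return default
```

A real implementation would apply this after speech recognition of the audio data sent by the peer device; the fallback default leaves both videos unscaled when no key information is found.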
Step S703: splicing the scaled first video and the scaled second video to obtain the target video.
Fig. 8 shows a possible splicing manner of this embodiment, in which the size of the first video differs from that of the second video; the first video is spliced to the right side of the second video, and the lower-left corner of the first video adjoins the lower-right corner of the second video.
Fig. 8 is only one possible splicing manner, and the first video may be spliced on the left side, above, or below the second video, and the specific splicing manner may be set according to an actual application scenario, which is not limited in this embodiment.
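The geometry of the fig. 8 layout, where the two scaled videos are bottom-aligned and the first video abuts the right edge of the second, can be sketched as follows. The helper name, the (width, height) size convention, and the origin at the canvas top-left are assumptions for illustration.

```python
# Hedged sketch of the fig. 8 layout geometry after step S702 scaling:
# first video on the right of the second, bottom edges aligned.
# Sizes are (width, height); positions are (x, y) top-left corners.
def layout_scaled(second_size, first_size, second_ratio, first_ratio):
    """Return (canvas_size, second_pos, first_pos) for the spliced
    target frame holding both scaled videos."""
    sw, sh = int(second_size[0] * second_ratio), int(second_size[1] * second_ratio)
    fw, fh = int(first_size[0] * first_ratio), int(first_size[1] * first_ratio)
    canvas_h = max(sh, fh)
    second_pos = (0, canvas_h - sh)  # bottom-aligned on the left
    first_pos = (sw, canvas_h - fh)  # abuts the second video's right edge
    return (sw + fw, canvas_h), second_pos, first_pos
```

Splicing on the left, above, or below only changes which coordinate is offset; the bottom alignment here follows the adjoining-corners description of fig. 8.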
In some specific embodiments, before step S702, the first video may be cropped according to a first preset length-width ratio and the second video cropped according to a second preset length-width ratio, with step S702 executed after cropping, so that a target video better meeting the doctor's diagnosis needs can be obtained.
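The cropping performed before step S702 can be sketched as below. The disclosure specifies only cropping to a preset length-width ratio; the policy of trimming the longer dimension and the integer arithmetic are assumptions.

```python
# Hedged sketch of cropping a frame to a preset length-width ratio
# before the scaling of step S702. Trimming the longer dimension is an
# illustrative policy; the disclosure specifies only the preset ratio.
def crop_to_aspect(width, height, ratio_w, ratio_h):
    """Return the (width, height) of the frame after cropping it to the
    preset ratio_w:ratio_h aspect ratio."""
    if width * ratio_h > height * ratio_w:
        # frame is too wide for the preset ratio: reduce the width
        return (height * ratio_w) // ratio_h, height
    # frame is too tall (or already matches): reduce the height
    return width, (width * ratio_h) // ratio_w
```

Cropping before scaling keeps the two scaling ratios independent of the original sensor aspect ratios, so the spliced target frame has a predictable shape.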
Step S704: and sending the target video to the opposite terminal equipment for displaying.
Step S704 is implemented in the same manner as step S303, and is not described herein again.
According to the above scheme, the first video is scaled based on its corresponding scaling ratio, the second video is scaled based on its corresponding scaling ratio, and the scaled first video and second video are spliced to obtain the target video. In this way, the first video is highlighted when the doctor mainly needs to view the condition of the affected part, allowing finer observation of the affected part; and the second video is highlighted when the doctor mainly needs to view the overall external condition, allowing the doctor to determine more accurately how the overall external condition changes when the user performs a specific action.
In addition, the target health detection device may send the first video directly to a cloud server, and the splicing step may be performed by the cloud server.
As shown in fig. 9, based on the same inventive concept, the disclosed embodiment provides a video clinic device 900, which includes:
the video acquisition module 901 is configured to: in response to a device selection instruction triggered by a user during a video visit conducted with a peer device through a camera connected to the camera interface of the terminal, acquire a first video containing a user part, captured by the target health detection device corresponding to the device selection instruction;
the video processing module 902 is configured to splice the first video and the second video acquired by the camera to obtain a target video;
a video sending module 903, configured to send the target video to the peer device for display.
In some optional embodiments, the video processing module 902 performs a stitching process on the first video and the second video captured by the camera, including:
zooming the first video based on the zoom ratio corresponding to the first video, and zooming the second video based on the zoom ratio corresponding to the second video;
and splicing the scaled first video and the scaled second video.
In some optional embodiments, the video processing module 902 determines the corresponding scaling of the first video and the corresponding scaling of the second video by:
taking a first preset proportion as a scaling proportion corresponding to the first video; taking a second preset proportion as a scaling proportion corresponding to the second video; or
Responding to a scale setting instruction, and determining a scaling corresponding to the first video and a scaling corresponding to the second video which are carried in the scale setting instruction; or
Determining a first proportion and a second proportion corresponding to target information contained in audio data sent by the opposite terminal equipment according to a corresponding relation between preset information and proportions; taking the first scale as a scaling corresponding to the first video; and taking the second scale as the corresponding scaling of the second video.
In some optional embodiments, the video acquiring module 901, before responding to a device selection instruction triggered by a user and acquiring a first video including a user part, acquired by a target health detection device corresponding to the device selection instruction, is further configured to:
displaying identification information of health detection equipment connected with the terminal through a display part of the terminal;
the video obtaining module 901 responds to a device selection instruction triggered by a user to obtain a first video containing a user part, which is acquired by a target health detection device corresponding to the device selection instruction, and includes:
responding to the equipment selection instruction, and selecting the target health detection equipment from the health detection equipment connected with the terminal;
acquiring a first video acquired by the target health detection device.
In some optional embodiments, the video processing module 902 performs a stitching process on the first video and the second video captured by the camera, including:
splicing the images in the first video and the images in the second video which are acquired at the same moment; or
Splicing the images in the first video and the images in the second video which are acquired at the same time; or
Matching the image in the first video and the image in the second video acquired within a preset time length; and splicing the image in the first video and the matched image in the second video.
Since the apparatus is the apparatus in the method in the embodiment of the present disclosure, and the principle of the apparatus for solving the problem is similar to that of the method, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not repeated.
As shown in fig. 10, based on the same inventive concept, the disclosed embodiment provides a terminal 1000, including: a processor 1001 and a memory 1002;
wherein the memory 1002 stores program code which, when executed by the processor 1001, causes the processor 1001 to perform the following:
in response to a device selection instruction triggered by a user during a video visit conducted with a peer device through a camera connected to the camera interface of the terminal, acquiring a first video containing a user part, captured by the target health detection device corresponding to the device selection instruction;
splicing the first video and the second video acquired by the camera to obtain a target video;
and sending the target video to the opposite terminal equipment for displaying.
In some optional embodiments, the processor 1001 is specifically configured to:
zooming the first video based on the zoom ratio corresponding to the first video, and zooming the second video based on the zoom ratio corresponding to the second video;
and splicing the scaled first video and the scaled second video.
In some optional embodiments, the processor 1001 is further configured to:
taking a first preset proportion as a scaling proportion corresponding to the first video; taking a second preset proportion as a scaling proportion corresponding to the second video; or
Responding to a scale setting instruction, and determining a scaling corresponding to the first video and a scaling corresponding to the second video which are carried in the scale setting instruction; or
Determining a first proportion and a second proportion corresponding to target information contained in audio data sent by the opposite terminal equipment according to a corresponding relation between preset information and proportions; and taking the first scale as the scaling corresponding to the first video, and taking the second scale as the scaling corresponding to the second video.
In some optional embodiments, before acquiring, in response to a device selection instruction triggered by a user, the first video containing a user part captured by the target health detection device corresponding to the device selection instruction, the processor 1001 is further configured to:
displaying identification information of health detection equipment connected with the terminal through the display component;
the processor 1001 is specifically configured to:
responding to the equipment selection instruction, and selecting the target health detection equipment from the health detection equipment connected with the terminal;
acquiring a first video acquired by the target health detection device.
In some optional embodiments, the processor 1001 is specifically configured to:
splicing the images in the first video and the images in the second video which are acquired at the same moment; or
Splicing the images in the first video and the images in the second video which are acquired at the same time; or
Matching the image in the first video and the image in the second video acquired within a preset time length; and splicing the image in the first video and the matched image in the second video.
The specific connection medium between the memory 1002 and the processor 1001 is not limited in the embodiments of the present disclosure. In fig. 10, the memory 1002 and the processor 1001 are connected by a bus 1003, and the bus 1003 is represented by a thick line in fig. 10. The bus 1003 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 10, but this is not intended to represent only one bus or type of bus.
Since the terminal is a terminal that executes the method in the embodiment of the present disclosure, and the principle of the terminal to solve the problem is similar to that of the method, the implementation of the terminal may refer to the implementation of the method, and repeated details are not repeated.
The disclosed embodiments provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the video clinic method as described above. The readable storage medium may be a nonvolatile readable storage medium, among others.
The present disclosure is described above with reference to block diagrams and/or flowchart illustrations of methods, apparatus (systems) and/or computer program products according to embodiments of the disclosure. It will be understood that one block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, and/or other programmable video clinic apparatus, such that the instructions, which execute via the processor of the computer and/or other programmable video clinic apparatus, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks.
Accordingly, the present disclosure may also be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.). Still further, the present disclosure may take the form of a computer program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system. In the context of this disclosure, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
While preferred embodiments of the present disclosure have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the disclosure.
It will be apparent to those skilled in the art that various changes and modifications can be made in the present disclosure without departing from the spirit and scope of the disclosure. Thus, if such modifications and variations of the present disclosure fall within the scope of the claims of the present disclosure and their equivalents, the present disclosure is intended to include such modifications and variations as well.

Claims (10)

1. A terminal, characterized in that the terminal comprises: a camera interface, a display component, and a processor;
the display component is configured to display a video sent by an opposite-end device that is in a video diagnosis session with the terminal;
the processor is configured to: in response to a device selection instruction triggered by a user during the video diagnosis session conducted with the opposite-end device through a camera connected via the camera interface, acquire a first video containing a user part that is acquired by a target health detection device corresponding to the device selection instruction; stitch the first video and a second video acquired by the camera to obtain a target video; and send the target video to the opposite-end device for display.
2. The terminal of claim 1, wherein the processor is specifically configured to:
scale the first video based on a scaling ratio corresponding to the first video, and scale the second video based on a scaling ratio corresponding to the second video; and
stitch the scaled first video and the scaled second video.
3. The terminal of claim 2, wherein the processor is further configured to:
take a first preset ratio as the scaling ratio corresponding to the first video, and take a second preset ratio as the scaling ratio corresponding to the second video; or
in response to a ratio setting instruction, determine the scaling ratio corresponding to the first video and the scaling ratio corresponding to the second video carried in the ratio setting instruction; or
determine, according to a preset correspondence between information and ratios, a first ratio and a second ratio corresponding to target information contained in audio data sent by the opposite-end device, take the first ratio as the scaling ratio corresponding to the first video, and take the second ratio as the scaling ratio corresponding to the second video.
4. The terminal of claim 1, wherein before responding to the device selection instruction triggered by the user to acquire the first video containing the user part acquired by the target health detection device corresponding to the device selection instruction, the processor is further configured to:
display, through the display component, identification information of health detection devices connected to the terminal;
and the processor is specifically configured to:
in response to the device selection instruction, select the target health detection device from the health detection devices connected to the terminal; and
acquire the first video acquired by the target health detection device.
5. The terminal of any of claims 1 to 4, wherein the processor is specifically configured to:
stitch an image in the first video and an image in the second video that are acquired at the same moment; or
stitch an image in the first video and an image in the second video that are acquired at the same time; or
match an image in the first video with an image in the second video acquired within a preset time length, and stitch the image in the first video with the matched image in the second video.
6. A video diagnosis method, applied to a terminal, the method comprising:
in response to a device selection instruction triggered by a user during a video diagnosis session conducted with an opposite-end device through a camera connected via a camera interface of the terminal, acquiring a first video containing a user part that is acquired by a target health detection device corresponding to the device selection instruction;
stitching the first video and a second video acquired by the camera to obtain a target video; and
sending the target video to the opposite-end device for display.
7. The method of claim 6, wherein stitching the first video and the second video acquired by the camera comprises:
scaling the first video based on a scaling ratio corresponding to the first video, and scaling the second video based on a scaling ratio corresponding to the second video; and
stitching the scaled first video and the scaled second video.
8. The method of claim 7, wherein the scaling ratio corresponding to the first video and the scaling ratio corresponding to the second video are determined by:
taking a first preset ratio as the scaling ratio corresponding to the first video, and taking a second preset ratio as the scaling ratio corresponding to the second video; or
in response to a ratio setting instruction, determining the scaling ratio corresponding to the first video and the scaling ratio corresponding to the second video carried in the ratio setting instruction; or
determining, according to a preset correspondence between information and ratios, a first ratio and a second ratio corresponding to target information contained in audio data sent by the opposite-end device, taking the first ratio as the scaling ratio corresponding to the first video, and taking the second ratio as the scaling ratio corresponding to the second video.
9. The method of claim 6, wherein before acquiring, in response to the device selection instruction triggered by the user, the first video containing the user part acquired by the target health detection device corresponding to the device selection instruction, the method further comprises:
displaying, through a display component of the terminal, identification information of health detection devices connected to the terminal;
and acquiring the first video containing the user part acquired by the target health detection device corresponding to the device selection instruction comprises:
in response to the device selection instruction, selecting the target health detection device from the health detection devices connected to the terminal; and
acquiring the first video acquired by the target health detection device.
10. The method of any one of claims 6 to 9, wherein stitching the first video and the second video acquired by the camera comprises:
stitching an image in the first video and an image in the second video that are acquired at the same moment; or
stitching an image in the first video and an image in the second video that are acquired at the same time; or
matching an image in the first video with an image in the second video acquired within a preset time length, and stitching the image in the first video with the matched image in the second video.
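The three alternatives for determining the two scaling ratios (claims 3 and 8) can be sketched as a simple selection function. This is an illustrative model only: the preset ratios, the `RATIO_TABLE` keyword mapping, and the function and parameter names are assumptions, not values given in the disclosure.

```python
# Sketch of the three scaling-ratio alternatives in claims 3 and 8.
# All concrete ratios and keywords below are illustrative assumptions.

PRESET_FIRST, PRESET_SECOND = 0.7, 0.3   # alternative 1: preset ratios
RATIO_TABLE = {                          # alternative 3: preset correspondence
    "zoom in": (0.8, 0.2),               # between target info (parsed from
    "zoom out": (0.5, 0.5),              # peer audio) and a pair of ratios
}

def scaling_ratios(instruction=None, peer_keyword=None):
    """Return (first_ratio, second_ratio) for the two videos.
    `instruction` models a ratio-setting instruction carrying both ratios;
    `peer_keyword` models target information contained in peer audio data."""
    if instruction is not None:                 # alternative 2: user-set ratios
        return instruction["first"], instruction["second"]
    if peer_keyword in RATIO_TABLE:             # alternative 3: from peer audio
        return RATIO_TABLE[peer_keyword]
    return PRESET_FIRST, PRESET_SECOND          # alternative 1: presets

print(scaling_ratios())                                           # (0.7, 0.3)
print(scaling_ratios(instruction={"first": 0.6, "second": 0.4}))  # (0.6, 0.4)
print(scaling_ratios(peer_keyword="zoom in"))                     # (0.8, 0.2)
```

Keeping the three sources in one function mirrors the claim structure: an explicit instruction or recognized peer-audio information overrides the preset ratios.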
CN202110040230.0A 2021-01-13 2021-01-13 Terminal and video diagnosis method Pending CN112653866A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110040230.0A CN112653866A (en) 2021-01-13 2021-01-13 Terminal and video diagnosis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110040230.0A CN112653866A (en) 2021-01-13 2021-01-13 Terminal and video diagnosis method

Publications (1)

Publication Number Publication Date
CN112653866A true CN112653866A (en) 2021-04-13

Family

ID=75368114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110040230.0A Pending CN112653866A (en) 2021-01-13 2021-01-13 Terminal and video diagnosis method

Country Status (1)

Country Link
CN (1) CN112653866A (en)


Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201601782U (en) * 2009-12-04 2010-10-06 中山爱科数字科技有限公司 Interactive remote video consultation device
CN202306552U (en) * 2011-06-24 2012-07-04 海纳医信(北京)软件科技有限责任公司 Remote real-time consultation system
CN202563507U (en) * 2012-02-27 2012-11-28 浙江浙大健康管理有限公司 Special equipment used for remote medical treatment
CN103500271A (en) * 2013-09-18 2014-01-08 李龙付 Visual interaction tele-medicine consultative service terminal
US20140136238A1 (en) * 2012-11-15 2014-05-15 Jonathan Simon Video archiving for on-line services
CN205507768U (en) * 2016-01-28 2016-08-24 秦皇岛光彩科技发展有限公司 Long -range medical services terminal of community
CN105956371A (en) * 2016-04-24 2016-09-21 芜湖云枫信息技术有限公司 Remote video interrogation system
CN106454203A (en) * 2016-09-27 2017-02-22 中电科软件信息服务有限公司 Mobile medical remote video consultation platform and method based on the internet
CN107169291A (en) * 2017-05-19 2017-09-15 四川鸣医科技有限公司 Real time remote medical treatment consultation system
CN110536088A (en) * 2019-09-24 2019-12-03 深圳华声医疗技术股份有限公司 Ultrasonic image-forming system device, ultrasonic imaging method
CN110782983A (en) * 2019-09-20 2020-02-11 杭州憶盛医疗科技有限公司 Remote dynamic pathological diagnosis method
CN210052537U (en) * 2019-08-12 2020-02-11 开封大学 Remote consultation system
CN111276206A (en) * 2020-03-30 2020-06-12 华药器械科技(苏州)有限公司 Remote outpatient service shelter monitoring method and device


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113397503A (en) * 2021-06-16 2021-09-17 苏州景昱医疗器械有限公司 Control method of household medical equipment and related device
WO2022262495A1 (en) * 2021-06-16 2022-12-22 苏州景昱医疗器械有限公司 Control method and related apparatus for household medical device
CN113397503B (en) * 2021-06-16 2022-12-27 苏州景昱医疗器械有限公司 Control method of household medical equipment and related device
CN115514913A (en) * 2022-09-16 2022-12-23 深圳市拓普智造科技有限公司 Video data processing method and device, electronic equipment and storage medium
CN115514913B (en) * 2022-09-16 2024-02-13 深圳市拓普智造科技有限公司 Video data processing method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
GB2590545A (en) Video photographing method and apparatus, electronic device and computer readable storage medium
CN112653866A (en) Terminal and video diagnosis method
JP2002358361A5 (en)
CN113727179A (en) Display device and method for display device to be compatible with external device
CN113747078B (en) Display device and focal length control method
CN111954043B (en) Information bar display method and display equipment
CN111836083B (en) Display device and screen sounding method
CN116017006A (en) Display device and method for establishing communication connection with power amplifier device
CN113014977B (en) Display device and volume display method
CN115190351B (en) Display equipment and media resource scaling control method
CN115082959A (en) Display device and image processing method
CN112911371B (en) Dual-channel video resource playing method and display equipment
CN112817679A (en) Display device and interface display method
CN115103144A (en) Display device and volume bar display method
CN113409220A (en) Face image processing method, device, medium and equipment
CN113596559A (en) Method for displaying information in information bar and display equipment
CN114942902A (en) Display device and multiplexing method of memory module thereof
CN112969099A (en) Camera device, first display equipment, second display equipment and video interaction method
CN113453069A (en) Display device and thumbnail generation method
CN113099308B (en) Content display method, display equipment and image collector
CN112261290B (en) Display device, camera and AI data synchronous transmission method
CN113053380B (en) Server and voice recognition method
CN113438553B (en) Display device awakening method and display device
CN113587812B (en) Display equipment, measuring method and device
CN113055977B (en) Method and device for scanning wireless hotspots

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210413