CN113395484A - Method and equipment for presenting microscopic sub-video information of target object - Google Patents


Info

Publication number
CN113395484A
CN113395484A (application CN202010172172.2A)
Authority
CN
China
Prior art keywords
sub
microscopic
video information
target object
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010172172.2A
Other languages
Chinese (zh)
Inventor
Zhang Daqing (张大庆)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pinghu Laidun Optical Instrument Manufacturing Co., Ltd.
Original Assignee
Pinghu Laidun Optical Instrument Manufacturing Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pinghu Laidun Optical Instrument Manufacturing Co., Ltd.
Priority to CN202010172172.2A
Publication of CN113395484A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 21/00 Microscopes
    • G02B 21/36 Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B 21/365 Control or image processing arrangements for digital or video microscopes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2624 Studio circuits for obtaining an image which is composed of whole input images, e.g. splitscreen

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application provides a method and device for presenting microscopic sub-video information of a target object. The method comprises: acquiring a plurality of pieces of microscopic sub-video information about a target object, wherein the target object comprises a plurality of sub-regions, each sub-region corresponds to at least one piece of microscopic sub-video information, and each piece of microscopic sub-video information comprises a plurality of pieces of microscopic image information based on a time sequence; and presenting the microscopic sub-video information corresponding to the plurality of sub-regions through a display device, wherein at any moment during presentation the microscopic sub-video information of each sub-region corresponds to the same time node in the time sequence. The method can provide microscopic data of regions of interest to the user more conveniently and intuitively, helps observers study both the target object as a whole and specific regions of interest, and improves the user experience.

Description

Method and equipment for presenting microscopic sub-video information of target object
Technical Field
The present application relates to the field of microscopic images, and more particularly, to a technique for presenting microscopic sub-video information of a target object.
Background
Microscopic optical imaging, also commonly referred to as "optical microscopy" or "light microscopy," is a technique in which visible light transmitted through or reflected from a sample passes through one or more lenses to produce a magnified image of the microscopic sample. The image can be observed directly by eye through an eyepiece, recorded by a photosensitive plate or a digital image detector such as a CCD or CMOS sensor, and displayed and analyzed on a computer. Combined with a camera device, a video of the specimen in the field of view can also be recorded. However, the field of view of a microscope is limited, and when the size of the sample to be observed exceeds the current field of view, it is difficult to observe the state of the entire sample.
Disclosure of Invention
An object of the present application is to provide a method and apparatus for presenting microscopic sub-video information of a target object.
According to an aspect of the present application, there is provided a method of presenting microscopic sub-video information of a target object, the method comprising:
acquiring a plurality of pieces of microscopic sub-video information about a target object, wherein the target object comprises a plurality of sub-areas, each sub-area corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information comprises a plurality of pieces of microscopic image information based on a time sequence;
and displaying the microscopic sub-video information corresponding to the plurality of sub-areas through a display device, wherein the microscopic sub-video information corresponding to each sub-area corresponds to the same time node in the time sequence at the same moment in the displaying process.
According to another aspect of the present application, there is provided a method of presenting microscopic sub-video information of a target object, the method comprising:
receiving a microscopic sub-video request about a target object sent by a terminal, wherein the microscopic sub-video request comprises identification information of the target object;
determining a plurality of pieces of microscopic sub-video information of the target object according to the identification information of the target object, wherein the target object comprises a plurality of sub-areas, each sub-area corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information comprises a plurality of pieces of microscopic image information based on a time sequence;
and returning the plurality of pieces of microscopic sub-video information of the target object to the terminal.
According to an aspect of the present application, there is provided an apparatus for presenting microscopic sub-video information of a target object, the apparatus comprising:
a first module, configured to acquire multiple pieces of microscopic sub-video information regarding a target object, where the target object includes multiple sub-regions, each sub-region corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes multiple pieces of microscopic image information based on a time sequence;
and a second module, configured to present the microscopic sub-video information corresponding to the multiple sub-regions through a display device, where at any same moment during presentation the microscopic sub-video information corresponding to each sub-region corresponds to the same time node in the time sequence.
According to another aspect of the present application, there is provided an apparatus for presenting microscopic sub-video information of a target object, the apparatus comprising:
a first module, configured to receive a microscopic sub-video request about a target object sent by a terminal, where the microscopic sub-video request includes identification information of the target object;
a second module, configured to determine, according to the identification information of the target object, multiple pieces of microscopic sub-video information of the target object, where the target object includes multiple sub-regions, each sub-region corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes multiple pieces of microscopic image information based on a time sequence;
and a third module, configured to return the multiple pieces of microscopic sub-video information of the target object to the terminal.
According to an aspect of the present application, there is provided an apparatus for presenting microscopic sub-video information of a target object, wherein the apparatus comprises:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the operations of any of the methods described above.
According to one aspect of the application, there is provided a computer-readable medium storing instructions that, when executed, cause a system to perform the operations of any of the methods described above.
Compared with the prior art, the present application acquires a plurality of pieces of microscopic sub-video information about a target object, wherein the target object comprises a plurality of sub-regions, each sub-region corresponds to at least one piece of microscopic sub-video information, and each piece of microscopic sub-video information comprises a plurality of pieces of microscopic image information based on a time sequence; the microscopic sub-video information corresponding to the plurality of sub-regions is then presented through a display device, wherein at any moment during presentation the microscopic sub-video information of each sub-region corresponds to the same time node in the time sequence. By presenting the plurality of pieces of microscopic sub-video information of the target object simultaneously, the whole target object can be observed through the combined sub-videos, while the sub-video of each sub-region still contains detailed microscopic video information of that region. Presenting the corresponding sub-video based on the region the user is interested in provides microscopic data of the region of interest more conveniently and intuitively, helps observers study both the whole target object and the region of interest, and improves the user experience.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a flow diagram of a method of presenting microscopic sub-video information of a target object according to one embodiment of the present application;
FIG. 2 illustrates an example of microscopic sub-video information representing a target object according to one embodiment of the present application;
FIG. 3 illustrates a flow diagram of a system method for presenting microscopic sub-video information of a target object according to one embodiment of the present application;
FIG. 4 illustrates a flow diagram of a method of presenting microscopic sub-video information of a target object according to another embodiment of the present application;
FIG. 5 illustrates functional modules of an apparatus for presenting microscopic sub-video information of a target object according to one embodiment of the present application;
FIG. 6 shows functional modules of an apparatus for presenting microscopic sub-video information of a target object according to another embodiment of the present application;
FIG. 7 illustrates an exemplary system that can be used to implement the various embodiments described in this application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
The Memory may include forms of volatile Memory, Random Access Memory (RAM), and/or non-volatile Memory in a computer-readable medium, such as Read Only Memory (ROM) or Flash Memory. Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, Phase-Change Memory (PCM), Programmable Random Access Memory (PRAM), Static Random-Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information that can be accessed by a computing device.
The device referred to in this application includes, but is not limited to, a user device, a network device, or a device formed by integrating a user device and a network device through a network. The user equipment includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user (e.g., through a touch panel), such as a smart phone or a tablet computer, and the mobile electronic product may employ any operating system, such as the Android or iOS operating system. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, whose hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of multiple servers; here, the cloud is composed of a large number of computers or web servers based on cloud computing, a kind of distributed computing in which one virtual supercomputer consists of a collection of loosely coupled computers. The network includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a VPN network, a wireless ad hoc network, etc. Preferably, the device may also be a program running on the user device, the network device, or a device formed by integrating the user device and the network device, the touch terminal, or the network device and the touch terminal through a network.
Of course, those skilled in the art will appreciate that the foregoing is by way of example only, and that other existing or future devices, which may be suitable for use in the present application, are also encompassed within the scope of the present application and are hereby incorporated by reference.
In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Fig. 1 illustrates a method of presenting microscopic sub-video information of a target object according to an aspect of the present application, wherein the method comprises step S101 and step S102. In step S101, a computing device acquires a plurality of pieces of microscopic sub-video information about a target object, wherein the target object includes a plurality of sub-regions, each sub-region corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes a plurality of pieces of microscopic image information based on a time sequence; in step S102, the computing device presents the microscopic sub-video information corresponding to the plurality of sub-regions through a display device, wherein at any same moment during presentation the microscopic sub-video information of each sub-region corresponds to the same time node in the time sequence. The method can be applied to a computing device including, but not limited to, a user device (any terminal capable of human-computer interaction with a user, for example via a touch pad), a network device (a computer, a network host, a single network server, a set of network servers, or a cloud of servers), or a device formed by integrating a user device and a network device through a network. By presenting the plurality of pieces of microscopic sub-video information of the target object simultaneously, the method realizes on-screen display of the microscopic data of the plurality of sub-regions, provides the user with a more convenient and practical means of presentation, enables more detailed and accurate observation and regulation of each region of the target object, and improves the user's environment for observing and regulating the target object.
Specifically, in step S101, the computing device acquires a plurality of pieces of microscopic sub-video information about a target object, wherein the target object includes a plurality of sub-regions, each sub-region corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes a plurality of pieces of microscopic image information based on a time sequence. For example, the microscopic image information includes high-magnification image information about the target object, or a partial region of it, acquired under an optical or electron microscope; the microscopic sub-video information includes high-magnification video information about a partial region of the target object under an optical or electron microscope, and can be captured directly by a corresponding camera device or generated from microscopic image information acquired by the camera device. The observation area of the target object is formed by combining a plurality of sub-regions under the current microscope objective, where each sub-region corresponds to the part of the target visible at one objective position, and the overall field of view covered by the plurality of sub-regions can contain the entire target object; alternatively, based on a user operation (such as a frame-selection operation), the computing device can divide out a plurality of user-selected sub-regions from the whole extent of the target object.
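The division into sub-regions and the assignment of each captured image to its sub-region can be sketched as follows. This is an illustrative example, not part of the patent: the coordinate fields, the grid-style division, and the region size are assumptions for illustration only.

```python
# Hypothetical sketch: assign each captured microscopic frame to a sub-region
# using the stage coordinates recorded at capture time. A regular grid of
# sub-regions of size region_width x region_height is assumed.

def assign_subregion(x, y, region_width, region_height):
    """Map a stage coordinate (x, y) to a (row, col) sub-region index."""
    return (int(y // region_height), int(x // region_width))

# Example frames: each carries its capture coordinates and time node "t".
frames = [
    {"x": 10, "y": 5, "t": 0.0},   # lands in sub-region (0, 0)
    {"x": 130, "y": 5, "t": 0.0},  # lands in sub-region (0, 1)
]
for f in frames:
    f["region"] = assign_subregion(f["x"], f["y"], 100, 100)
```

In practice the region boundaries would come from the microscope's field-of-view geometry or from the user's frame-selection operation rather than a fixed grid.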
Here, the computing device includes an imaging apparatus, by which microscopic sub-video information about the target object is acquired directly, or by which microscopic image information about the plurality of sub-regions of the target object is acquired and the corresponding microscopic sub-video information is generated from the plurality of pieces of microscopic image information; alternatively, the computing device includes a communication unit for establishing a communication connection with another device and receiving, through that connection, the plurality of pieces of microscopic sub-video information about the target object sent by the other device.
In some embodiments, the microscopic sub-video information is generated from a plurality of pieces of microscopic image information of the corresponding sub-region based on a time sequence. For example, the microscopic image information includes image information about each sub-region captured under microscopic conditions by the imaging device; each piece of image information includes the coordinate position information at the time of its capture, based on which the corresponding sub-region can be determined. Each piece of microscopic image information further includes a time node corresponding to the moment at which it was captured, where each time node is either the capture moment T itself or a time interval of a certain length centered on that moment, such as [T - T0, T + T0]. The time sequences formed by the time nodes of the microscopic image information of the respective sub-regions can be identical or share a common time sequence, based on which the computing device can determine the plurality of pieces of microscopic sub-video information: for example, the microscopic image information of each sub-region is ordered according to a certain time sequence and certain video parameters are set, generating the corresponding microscopic sub-video information. In some embodiments, the method further includes step S103 (not shown): the computing device arranges the plurality of pieces of microscopic image information according to a preset time sequence based on their time nodes, and generates the microscopic sub-video information of the corresponding sub-region.
For example, the computing device may order the microscopic image information of a sub-region according to the time node of each piece of microscopic image information and a preset time sequence, and set certain video parameters (such as playing 30 frames per second) to generate the corresponding microscopic sub-video information. The preset time sequence may be set by the user or selected from time sequences provided by the system; the corresponding ordering includes, but is not limited to, sorting in chronological order, sampling at fixed time intervals, or selecting a certain time interval (a time interval being a span whose start is one time node and whose end is another, played from the start time to the end time).
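The generation of a sub-video by ordering time-stamped microscopic images, as described above, can be sketched as follows. This is an illustrative assumption, not the patent's implementation: real frames would carry image data, and the dictionary layout is hypothetical.

```python
# Hypothetical sketch: build a sub-video for one sub-region by sorting its
# frames on their time nodes "t", optionally clipping to a chosen interval.

def build_subvideo(frames, t_start=None, t_end=None, fps=30):
    """Order a sub-region's frames chronologically; optionally keep only
    frames whose time node lies in [t_start, t_end]."""
    ordered = sorted(frames, key=lambda f: f["t"])
    if t_start is not None and t_end is not None:
        ordered = [f for f in ordered if t_start <= f["t"] <= t_end]
    return {"fps": fps, "frames": ordered}

# Frames may arrive out of capture order; build_subvideo restores the sequence.
sv = build_subvideo([{"t": 2.0}, {"t": 0.5}, {"t": 1.0}])
```

Writing the ordered frames out as an actual video file (e.g. with a video-encoding library) is a separate step not shown here.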
In step S102, the computing device presents the microscopic sub-video information corresponding to the plurality of sub-areas through the display device, wherein at any same moment during presentation the microscopic sub-video information of each sub-area corresponds to the same time node in the time sequence. For example, the display device is a device used for presenting the microscopic sub-video information, such as a display screen or a projector. If the target object is divided into four sub-areas 1-1, 1-2, 2-1 and 2-2, each with a corresponding piece of sub-video information of 1 minute duration, the computing device divides the current display into four equal regions, such as four identical rectangles at the upper left, upper right, lower left and lower right corners, and presents the sub-video of sub-area 1-1 at the upper left, the sub-video of sub-area 1-2 at the upper right, the sub-video of sub-area 2-1 at the lower left, and the sub-video of sub-area 2-2 at the lower right. During presentation, the videos played in the four regions at any given moment correspond to the same time node: for example, when the upper-left video has played to 30.0 s, the other three regions are also playing the frames corresponding to 30.0 s. As another example, as shown in fig. 2, six sub-videos of the target object, video 1 to video 6, are presented at different positions in the display apparatus; the six sub-videos correspond to different sub-areas of the target object, each sub-video is played against the same time sequence, and the same time node on the time sequence is presented at the same moment.
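The synchronization constraint above (every sub-video showing the same time node at any given moment) can be sketched as a lookup from one shared play clock to a frame index per sub-video. The data layout is an assumption for illustration, not taken from the patent.

```python
# Hypothetical sketch: given a shared play clock, pick the frame that each
# sub-video should display, so all sub-videos show the same time node.

def synchronized_frames(subvideos, play_time):
    """Return, per sub-video, the frame displayed at play_time seconds."""
    shown = {}
    for name, sv in subvideos.items():
        idx = min(int(play_time * sv["fps"]), len(sv["frames"]) - 1)
        shown[name] = sv["frames"][idx]
    return shown

# Two sub-videos at 30 fps, each 60 frames (2 seconds) long.
subvideos = {
    "1-1": {"fps": 30, "frames": [{"t": i / 30} for i in range(60)]},
    "2-2": {"fps": 30, "frames": [{"t": i / 30} for i in range(60)]},
}
shown = synchronized_frames(subvideos, 1.0)
```

Driving every sub-video from the one `play_time` value, rather than giving each its own clock, is what guarantees the frames stay on the same time node.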
Of course, those skilled in the art should understand that the above described sub-region division and sub-video presentation are only examples, and other existing or future sub-region division and sub-video presentations may be applicable to the present application and are included within the scope of the present application and are incorporated herein by reference.
In some embodiments, the microscopic sub-video information includes, but is not limited to, two-dimensional microscopic sub-video information and three-dimensional microscopic sub-video information. For example, the two-dimensional microscopic sub-video information of a sub-region is video about a certain plane of the target object at the same focal-plane height in that sub-region, and is determined based on microscopic image information of the sub-region acquired at that focal-plane height. The three-dimensional microscopic sub-video information includes a plurality of pieces of three-dimensional microscopic image information about a three-dimensional model of a sub-region, arranged in a certain order (for example, in the order of the time corresponding to each piece of three-dimensional microscopic image information); it includes three-dimensional microscopic image information of the sub-region at a plurality of moments, each moment corresponding to at least one piece of three-dimensional microscopic image information, and the three-dimensional microscopic image information includes image information, acquired by a microscopic imaging device, containing the three-dimensional spatial coordinates of the sub-region. The computing device is installed with a corresponding application or plug-in through which each piece of three-dimensional microscopic image information, and likewise the three-dimensional microscopic sub-video information, can be fully presented.
In some embodiments, the method further includes step S104 (not shown): in step S104, the computing device generates corresponding regulation instructions based on the user's regulation operations on the plurality of pieces of microscopic sub-video information; in step S102, the computing device presents the plurality of pieces of microscopic sub-video information through a display device according to the regulation instructions, wherein at any same moment during presentation the microscopic sub-video information corresponding to each sub-region corresponds to the same time node in the time sequence. The regulation operations include, but are not limited to, the user's adjustment of the playing mode, playing speed, presentation view angle, presentation position, presentation window size, or other parameters of the plurality of pieces of microscopic sub-video information, and the corresponding regulation instructions include, but are not limited to, adjustment instruction information for the playing mode, playing speed, presentation view angle, presentation position, presentation window size, and other parameters of the microscopic sub-video information. The computing device further includes an input unit for acquiring user input, such as a touch pad, a keyboard, a mouse, or a touch screen, through which the computing device may capture a regulation operation such as a touch, a click, or a scroll, and generate the corresponding regulation instruction.
Here, in some embodiments, the regulation instruction information includes, but is not limited to: presenting at least one of the plurality of pieces of microscopic sub-video information; enlarging or reducing at least one of the plurality of pieces of microscopic sub-video information; pausing at least one of the plurality of pieces of microscopic sub-video information; selecting a first time interval in the time sequence corresponding to the plurality of pieces of microscopic sub-video information for presentation; and adjusting the presentation position of at least one of the plurality of pieces of microscopic sub-video information. For example, the regulation operations include, but are not limited to, user operations on an input device such as a mouse, a keyboard, a touch screen, or a microphone, based on which the computing device may generate the corresponding regulation instructions.
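How input events might be translated into the regulation instructions listed above can be sketched as follows. The event and instruction formats are entirely hypothetical; the patent does not prescribe any particular encoding.

```python
# Hypothetical sketch: map raw input events (click, pinch, drag) to the
# regulation-instruction categories named in the text. Field names are
# illustrative assumptions.

def make_instruction(event):
    """Translate one input event into a regulation instruction dict."""
    if event["type"] == "click":
        # Click selects a sub-video for presentation.
        return {"op": "select", "target": event["target"]}
    if event["type"] == "pinch":
        # Two-finger pinch enlarges (scale > 1) or reduces (scale < 1).
        op = "enlarge" if event["scale"] > 1 else "reduce"
        return {"op": op, "target": event["target"], "scale": event["scale"]}
    if event["type"] == "drag":
        # Drag adjusts the presentation position of a sub-video.
        return {"op": "move", "target": event["target"], "to": event["to"]}
    return {"op": "noop"}
```

A real implementation would also carry pause and time-interval-selection events; they would follow the same dispatch pattern.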
For example, the control instruction includes presenting at least one of the plurality of pieces of microscopic sub-video information. For example, the target object is equally divided into four sub-regions 1-1, 1-2, 2-1, and 2-2, with four pieces of microscopic sub-video information of 1-minute duration corresponding to the four sub-regions. The computing device divides the current display apparatus into four equal regions, such as four identical rectangles at the upper left, upper right, lower left, and lower right corners, and presents the microscopic sub-video information corresponding to sub-region 1-1 at the upper left, that of sub-region 1-2 at the upper right, that of sub-region 2-1 at the lower left, and that of sub-region 2-2 at the lower right. If the regions of interest to the user include only sub-regions 1-1 and 2-2, then, for example, when the user performs a click-selection operation on a touch screen, the computing device generates a corresponding control instruction, closes the microscopic sub-video information of sub-regions 1-2 and 2-1, and presents only the microscopic sub-video information of sub-regions 1-1 and 2-2. In some embodiments, the computing device redistributes the presentation region of each remaining sub-region in the display device based on the at least one presented sub-region; for example, the screen is divided into two rectangles of the same size, or into rectangles desired by the user based on user requirements, and the microscopic sub-video information of sub-regions 1-1 and 2-2 is presented in the two rectangular regions respectively.
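The redistribution of presentation regions after closing the unselected sub-videos can be sketched as a simple layout computation. This is a hypothetical sketch; the function name and the choice of equal vertical strips are assumptions, since the text only requires that the remaining sub-videos share the screen.

```python
# Hypothetical layout sketch: close the sub-videos the user did not select
# and split the screen evenly among the remaining ones.

def redistribute(screen_w, screen_h, selected):
    """Divide the screen into equal vertical strips, one per kept region.
    Returns {region: (x, y, width, height)}."""
    n = len(selected)
    strip_w = screen_w // n
    return {region: (i * strip_w, 0, strip_w, screen_h)
            for i, region in enumerate(selected)}

layout = redistribute(1920, 1080, ["1-1", "2-2"])
# two equal 960x1080 rectangles, side by side
assert layout["1-1"] == (0, 0, 960, 1080)
assert layout["2-2"] == (960, 0, 960, 1080)
```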
For example, the control instruction includes enlarging or reducing at least one of the plurality of pieces of microscopic sub-video information. For example, the target object is equally divided into four sub-regions 1-1, 1-2, 2-1, and 2-2, with four pieces of microscopic sub-video information of 1-minute duration corresponding to the four sub-regions, and the computing device presents them in the four-rectangle layout described above. Based on a user operation, such as the user performing a two-finger spread operation on the touch screen over the region of the microscopic sub-video information of sub-region 1-1, the computing device generates a corresponding control instruction for enlarging that microscopic sub-video information, and performs the enlargement operation centered on the midpoint of the user's two fingers, so as to further present the details of interest to the user in the microscopic sub-video information of sub-region 1-1. 
For another example, based on the user performing a two-finger pinch operation on the touch screen over the microscopic sub-video information of sub-region 2-2, the computing device generates a control instruction for correspondingly reducing that microscopic sub-video information, performs the reduction operation centered on the midpoint of the two fingers, and thus presents more of the overall information of the microscopic sub-video information of sub-region 2-2 for the user to observe. In some embodiments, when at least one piece of the plurality of pieces of microscopic sub-video information is enlarged or reduced, the presentation window of that microscopic sub-video information may be enlarged or reduced as well; for example, if the user needs to observe the microscopic sub-video information corresponding to sub-region 2-1 carefully, the user enlarges its presentation window while reducing the presentation windows of the microscopic sub-video information of the other three sub-regions.
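The enlargement or reduction centered on the midpoint of the user's two fingers amounts to scaling every displayed point about the pinch center. A hypothetical sketch of that geometry; `zoom_point` is an invented helper, not an API from the text.

```python
# Hypothetical sketch of zooming a sub-video about the midpoint of the
# user's two fingers: each displayed point moves away from (zoom in) or
# toward (zoom out) the pinch center (cx, cy).

def zoom_point(x, y, cx, cy, factor):
    """Scale the point (x, y) about the pinch center (cx, cy)."""
    return (cx + (x - cx) * factor, cy + (y - cy) * factor)

# two-finger spread (factor > 1) magnifies around the center (100, 100)
assert zoom_point(110, 100, 100, 100, 2.0) == (120.0, 100.0)
# two-finger pinch (factor < 1) shrinks toward the center
assert zoom_point(120, 100, 100, 100, 0.5) == (110.0, 100.0)
```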
For example, the control instruction includes pausing at least one of the plurality of pieces of microscopic sub-video information. If the target object is equally divided into the four sub-regions 1-1, 1-2, 2-1, and 2-2, with four pieces of microscopic sub-video information of 1-minute duration presented in the four-rectangle layout described above, and the user is observing an image frame in the microscopic sub-video information of sub-region 1-1, then based on a user operation, such as clicking the microscopic sub-video information of sub-region 1-1 on the touch screen or touching a pause key, the computing device generates a control instruction corresponding to the microscopic sub-video information of sub-region 1-1 and pauses it at the currently played video frame. In some embodiments, the pause control instruction is effective for all the microscopic sub-video information; for example, when the microscopic sub-video information of sub-region 1-1 is paused, the microscopic sub-video information of the other three sub-regions is paused at the same time. In other embodiments, the pause control instruction is only valid for the microscopic sub-video information of sub-region 1-1; for example, when the microscopic sub-video information of sub-region 1-1 is paused, the microscopic sub-video information of the other sub-regions is still played, which is convenient for the user to continue observing the microscopic data of the other regions.
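The two pause behaviours — a pause instruction effective for all sub-videos versus one valid only for the touched sub-video — can be sketched as follows. The `pause` function and the state dictionary are invented for this illustration.

```python
# Hypothetical sketch of the two pause behaviours: a pause instruction is
# either valid for all sub-videos (global) or only for the touched one.

def pause(states, target, global_pause=False):
    """states: {region: paused?}. Pause the target region, or everything."""
    for region in states:
        if global_pause or region == target:
            states[region] = True
    return states

states = {r: False for r in ("1-1", "1-2", "2-1", "2-2")}
pause(states, "1-1")                       # local pause: only 1-1 stops
assert states == {"1-1": True, "1-2": False, "2-1": False, "2-2": False}
pause(states, "1-1", global_pause=True)    # global pause: everything stops
assert all(states.values())
```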
For example, the control instruction includes selecting a first time interval in the time sequence corresponding to the plurality of pieces of microscopic sub-video information for presentation. If the target object is equally divided into the four sub-regions 1-1, 1-2, 2-1, and 2-2, with four pieces of microscopic sub-video information of 1-minute duration presented in the four-rectangle layout described above, and the user wants to observe the video information corresponding to a time period of interest in the microscopic sub-video information of sub-region 1-1, then, if the user sets the playing time period (e.g., from 15 s to 45 s) in the microscopic sub-video information of sub-region 1-1, the computing device generates a corresponding control instruction and plays the microscopic sub-video information of sub-region 1-1 from 15 s to 45 s. In some embodiments, the playing setting is synchronized to all sub-regions; for example, the playing time periods of the other three sub-regions are also adjusted to 15 s to 45 s.
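Selecting a first time interval and synchronizing it to every sub-region can be sketched as follows; the clamping to the 1-minute duration and the `select_interval` name are assumptions of the example.

```python
# Hypothetical sketch: the user selects a first time interval (15 s to
# 45 s) on one sub-video and the setting is synchronized to every
# sub-region, so all sub-videos play the same slice of the time sequence.

def select_interval(regions, start, end, duration=60.0):
    """Clamp the interval to the video duration and apply it everywhere."""
    start, end = max(0.0, start), min(end, duration)
    return {r: (start, end) for r in regions}

intervals = select_interval(["1-1", "1-2", "2-1", "2-2"], 15.0, 45.0)
assert intervals["2-2"] == (15.0, 45.0)
assert len(set(intervals.values())) == 1  # same interval in all sub-regions
```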
For example, the control instruction includes adjusting the presentation position of at least one of the plurality of pieces of microscopic sub-video information. If the target object is equally divided into the four sub-regions 1-1, 1-2, 2-1, and 2-2, with four pieces of microscopic sub-video information of 1-minute duration presented in the four-rectangle layout described above, and the user, while observing the microscopic sub-video information of sub-regions 1-1 and 2-2, finds associated microscopic data and wants to play the two pieces of microscopic sub-video information side by side to facilitate comparison and analysis of the associated microscopic data, then, based on a user operation such as dragging the window of the microscopic sub-video information of sub-region 2-2, the computing device generates a corresponding control instruction for adjusting the presentation position of the microscopic sub-video information of sub-region 2-2 and moves it to the final position of the user's drag operation, where the displacement vector of the presentation position points from the start point of the user's finger touch on the touch screen to its end point. In some embodiments, based on the displacement vector between the start point and the end point touched by the user's finger on the screen, the computing device calculates an adaptive adjustment region for the microscopic sub-video information of sub-region 2-2, such as adjusting the microscopic sub-video information of sub-region 2-2 to the presentation position originally corresponding to sub-region 1-2, and adjusting the microscopic sub-video information corresponding to sub-region 1-2 to the presentation position of the microscopic sub-video information of sub-region 2-2.
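The adaptive position adjustment — swapping the dragged sub-video with the one whose window contains the end point of the displacement vector — can be sketched as follows. `swap_by_drag` and the window geometry are invented for the illustration.

```python
# Hypothetical sketch of the adaptive position adjustment: a drag whose
# displacement vector ends inside another sub-video's window swaps the two
# presentation positions.

def swap_by_drag(positions, dragged, drop_point):
    """positions: {region: (x, y, w, h)}. Swap the dragged region with the
    region whose window contains the drop point, if any."""
    for region, (x, y, w, h) in positions.items():
        if region != dragged and x <= drop_point[0] < x + w \
                and y <= drop_point[1] < y + h:
            positions[dragged], positions[region] = \
                positions[region], positions[dragged]
            break
    return positions

pos = {"1-1": (0, 0, 960, 540),   "1-2": (960, 0, 960, 540),
       "2-1": (0, 540, 960, 540), "2-2": (960, 540, 960, 540)}
swap_by_drag(pos, "2-2", (1000, 100))    # drop lands inside 1-2's window
assert pos["2-2"] == (960, 0, 960, 540)  # 2-2 now sits where 1-2 was
assert pos["1-2"] == (960, 540, 960, 540)
```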
In some embodiments, the microscopic sub-video information comprises three-dimensional microscopic sub-video information, and the control instruction includes, but is not limited to: adjusting the presentation view angle of at least one of the plurality of pieces of microscopic sub-video information; and scrolling the presentation of at least one of the plurality of pieces of microscopic sub-video information. For example, when the microscopic sub-video information includes three-dimensional microscopic sub-video information about the target object, the three-dimensional space coordinates and the like of each sub-region can be presented in a specific application for each piece of microscopic sub-video information, and the presentation angle for viewing a sub-region can be switched by rotating its view angle and the like.
For example, the control instruction includes adjusting the presentation view angle of at least one of the plurality of pieces of microscopic sub-video information. For example, when a user observes the three-dimensional microscopic sub-video information, composed of a plurality of sub-regions, of a certain embryo development process, then, based on a mouse movement, a touch movement operation, or the input of a corresponding instruction by the user on the three-dimensional microscopic sub-video information of one or more of the sub-regions, the computing device generates a corresponding control instruction, where the control instruction includes switching the current presentation view angle of the three-dimensional microscopic sub-video information, such as switching it from a front view to a side view, so that the user can observe the development process of the embryo in that sub-region from the side.
For example, the control instruction includes scrolling the presentation of at least one of the plurality of pieces of microscopic sub-video information. For example, for the three-dimensional microscopic sub-video of a sub-region in an embryo development process, a final viewing angle (e.g., a top view) is selected, and the presentation viewing angle of the three-dimensional microscopic sub-video of the sub-region is rotated from the current viewing angle (e.g., a front view) at a certain angular speed within a certain playing time, thereby achieving the effect of a scrolling presentation of the three-dimensional microscopic video of the embryo development of the part corresponding to that sub-region.
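The scrolling presentation amounts to interpolating the presentation viewing angle from the current view to the final view at a constant angular speed over the playing time. An illustrative sketch; mapping the front view to 0° and the top view to 90° of pitch is an assumption of the example.

```python
# Hypothetical sketch of the scrolling presentation: the viewpoint pitch is
# interpolated from the current viewing angle (front view, 0 deg) to the
# final viewing angle (top view, 90 deg) at constant angular speed.

def view_angle(t, t_total, start_deg=0.0, end_deg=90.0):
    """Pitch angle at playback time t (seconds), clamped to [0, t_total]."""
    t = min(max(t, 0.0), t_total)
    return start_deg + (end_deg - start_deg) * t / t_total

assert view_angle(0.0, 60.0) == 0.0    # starts at the front view
assert view_angle(30.0, 60.0) == 45.0  # halfway through the rotation
assert view_angle(60.0, 60.0) == 90.0  # ends at the top view
```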
In some embodiments, each of the sub-regions corresponds to a plurality of pieces of microscopic sub-video information corresponding to at least one piece of different microscopic parameter information, and the control instruction includes switching the microscopic sub-video information corresponding to at least one sub-region of the plurality of sub-regions based on the corresponding microscopic parameter information. For example, the microscopic parameter information includes the corresponding acquisition configuration information for acquiring each sub-region of the target object, including but not limited to focal plane height information; objective lens magnification information; lighting light color information; lighting lamp brightness information; fluorescence wavelength information; temperature information; humidity information; pH value information; polarized light angle information; DIC rotation angle information; altitude information, and the like. The computing device acquires the microscopic sub-video information corresponding to the microscopic parameter information according to the microscopic parameter information of at least one piece of microscopic sub-video information among the plurality of pieces of microscopic sub-video information. 
For example, if a user wants to observe microscopic data of red blood cells and white blood cells in a blood sample, then, owing to the differences between individual red blood cells and white blood cells, the red blood cells can be observed with a ten-fold objective lens while the white blood cells require a one-hundred-fold objective lens; the computing device obtains microscopic sub-video information under the ten-fold objective lens for the sub-region of the red blood cells and microscopic sub-video information under the one-hundred-fold objective lens for the sub-region of the white blood cells. The microscopic image information from which the computing device synthesizes the microscopic sub-video information contains data captured under the different microscopic parameters (such as the ten-fold or one-hundred-fold objective lens), so that the computing device can obtain microscopic sub-video information or microscopic image information about the blood sample under the ten-fold objective lens as well as under the one-hundred-fold objective lens, thereby achieving the effect of obtaining the corresponding microscopic sub-video information under different microscopic parameters. In some embodiments, the microscopic parameter information includes, but is not limited to: focal plane height information; objective lens magnification information; lighting light color information; lighting lamp brightness information; fluorescence wavelength information; temperature information; humidity information; pH value information; polarized light angle information; DIC rotation angle information; altitude information, and the like. 
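Switching the microscopic sub-video information of a sub-region by microscopic parameter information can be sketched as a lookup keyed on the sub-region and the parameter (reduced here to only the objective magnification). The region names and file names are invented for the example.

```python
# Hypothetical sketch: each sub-region stores one sub-video per set of
# microscopic parameters; a switch instruction selects the sub-video whose
# parameters (here, objective magnification) match the request.

videos = {
    # (sub-region, objective magnification) -> sub-video identifier
    ("rbc", 10):  "rbc_10x.mp4",   # red blood cells, ten-fold objective
    ("wbc", 100): "wbc_100x.mp4",  # white blood cells, hundred-fold
    ("wbc", 10):  "wbc_10x.mp4",
}

def switch(region, magnification):
    """Return the sub-video recorded under the requested parameter."""
    return videos.get((region, magnification))

assert switch("rbc", 10) == "rbc_10x.mp4"
assert switch("wbc", 100) == "wbc_100x.mp4"
assert switch("rbc", 100) is None  # no sub-video under that parameter
```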
For example, the focal plane height information includes the height of the focal plane of the objective lens in a spatial coordinate system corresponding to the target object on the stage; the objective lens magnification information refers to the ratio of the size of the image seen by the eye to the size of the corresponding target object, and specifically refers to the ratio of lengths rather than the ratio of areas; the lighting light color information includes the color of the lighting light used to assist shooting when the microscopic sub-video information or microscopic image information is captured; the lighting lamp brightness information includes the brightness of the lighting lamp used to assist shooting when the microscopic sub-video information or microscopic image information is captured; fluorescence refers to the radiation re-emitted by a substance after it absorbs electromagnetic radiation, where the wavelength re-emitted by the excited atoms or molecules during de-excitation may be the same as or different from the wavelength of the exciting radiation, and the fluorescence wavelength information includes the wavelength of the fluorescence used to assist shooting when the microscopic sub-video information or microscopic image information is captured; the temperature information includes the temperature of the solution in which the target object is located on the stage when the microscopic sub-video information or microscopic image information of the target object is acquired; the humidity information includes the humidity of the solution in which the target object is located on the stage when the microscopic sub-video information or microscopic image information of the target object is acquired; the pH value information includes the pH value of the solution in which the target object is located on the stage when the microscopic sub-video information or microscopic image information of the target object is acquired.
Fig. 3 illustrates a method for presenting microscopic sub-video information of a target object, where the method is applied to a computing device that includes a terminal and a cloud, and the method includes:
the method comprises the steps that a terminal sends a microscopic sub-video request about a target object to a cloud, wherein the microscopic sub-video request comprises identification information of the target object;
the cloud receives the microscopic sub-video request and determines a plurality of pieces of microscopic sub-video information of the target object according to the identification information of the target object, wherein the target object comprises a plurality of sub-areas, each sub-area corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information comprises a plurality of pieces of microscopic image information based on a time sequence;
the cloud returns the multiple pieces of microscopic sub-video information of the target object to the terminal;
and the terminal receives and displays the microscopic sub-video information corresponding to the plurality of sub-areas through a display device, wherein the microscopic sub-video information corresponding to each sub-area corresponds to the same time node in the time sequence at the same moment in the display process.
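The terminal/cloud exchange above can be sketched as follows; the `Cloud`, `Terminal`, and `target_id` names are invented, and the identification information is reduced to a single key for brevity.

```python
# Hypothetical sketch of the Fig. 3 exchange: the terminal sends a request
# carrying the target object's identification information, and the cloud
# looks up the sub-videos of every sub-region and returns them.

class Cloud:
    def __init__(self):
        # identification info -> {sub-region: sub-video identifier}
        self.store = {"embryo-42": {"1-1": "v11", "1-2": "v12",
                                    "2-1": "v21", "2-2": "v22"}}

    def handle(self, request):
        return self.store.get(request["target_id"], {})

class Terminal:
    def request_videos(self, cloud, target_id):
        reply = cloud.handle({"target_id": target_id})
        # the terminal would then present all sub-videos, keeping each on
        # the same time node of the shared time sequence
        return reply

videos = Terminal().request_videos(Cloud(), "embryo-42")
assert set(videos) == {"1-1", "1-2", "2-1", "2-2"}
```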
In some embodiments, the computing device includes a terminal and a cloud, and when the plurality of pieces of microscopic sub-video information are presented at the terminal, the cloud is configured to obtain the plurality of pieces of microscopic sub-video information and send them to the terminal. For example, step S101 includes sub-step S1011 (not shown) and sub-step S1012 (not shown). In step S1011, the terminal sends a microscopic sub-video request about a target object to the cloud, wherein the microscopic sub-video request includes identification information of the target object; in step S1012, the terminal receives multiple pieces of microscopic sub-video information about the target object returned by the cloud, where the target object includes multiple sub-areas, each sub-area corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes multiple pieces of microscopic image information based on a time sequence. For example, the terminal sends a microscopic sub-video request about a target object to the cloud, where the microscopic sub-video request includes identification information of the target object, and the identification information includes an identifier for determining the corresponding microscopic sub-video information and the like, including but not limited to microscopic image sequence information of the target object, a key field of the target object, image information of the target object, microscopic record information of the target object, and unique identification code information of the target object. 
The cloud end stores the corresponding relation between the identification information of the target object and the microscopic image information or the plurality of microscopic sub-video information of the target object, determines the corresponding microscopic sub-video information based on the identification information of the target object uploaded by the user equipment, or determines the corresponding microscopic image information and determines the corresponding plurality of microscopic sub-video information based on the microscopic image information, then returns the plurality of microscopic sub-video information to the terminal, and the terminal receives and presents the microscopic sub-video information.
In some embodiments, the microscope sub-video request further includes microscope parameter information of the plurality of microscope sub-video information, wherein in step S1012, the terminal receives the plurality of microscope sub-video information corresponding to the microscope parameter information and about the target object, which includes a plurality of sub-regions, each of which corresponds to at least one microscope sub-video information, and the microscope sub-video information includes a plurality of microscope image information based on a time sequence. For example, the terminal sends identification information about a target object to a cloud, wherein the identification information includes identifiers and the like for determining corresponding multiple pieces of microscopic sub-video information, including but not limited to key fields of the target object; image information of the target object; microscopic recording information of the target object; unique identification code information of the target object; a plurality of microscopic image information of a plurality of sub-regions of the target object, and the like. The cloud end stores the corresponding relation between the identification information of the target object and the microscopic sub-video information or the microscopic image information of the target object, determines the corresponding multiple pieces of microscopic sub-video information or determines the corresponding microscopic image information based on the identification information of the target object uploaded by the terminal, and then returns the microscopic sub-video information to the terminal, and the terminal receives and presents the multiple pieces of microscopic sub-video information.
In some embodiments, in step S1012, the terminal receives access link information of a plurality of pieces of microscopic sub-video information about the target object returned by the cloud, wherein the target object includes a plurality of sub-areas, each sub-area corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes a plurality of pieces of microscopic image information based on a time sequence; in step S102, the terminal accesses the corresponding web page according to the access link information and presents the microscopic sub-video information corresponding to the plurality of sub-areas through the display device, where the microscopic sub-video information corresponding to each sub-area corresponds to the same time node in the time sequence at the same moment in the presentation process. For example, after the cloud determines the microscopic sub-video information corresponding to the target object, the cloud does not directly return the corresponding multiple pieces of microscopic sub-video information to the terminal, but generates a web page corresponding to the microscopic sub-video information and returns the access link information corresponding to the web page to the terminal; the terminal receives the access link information, enters the corresponding web page through it, requests the corresponding multiple pieces of microscopic sub-video information from the cloud, and presents them in the web page.
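The access-link variant can be sketched as deriving a stable web-page link from the target object's identification information; the URL scheme and the use of a hash token are pure assumptions of the example, as the text does not specify how links are formed.

```python
# Hypothetical sketch of the access-link variant: instead of returning the
# sub-videos directly, the cloud returns a link to a generated web page,
# through which the terminal then fetches the sub-videos.

import hashlib

def make_access_link(target_id, base="https://example.invalid/view/"):
    """Derive a stable page link from the target's identification info
    (the URL scheme here is illustrative only)."""
    token = hashlib.sha256(target_id.encode()).hexdigest()[:12]
    return base + token

link = make_access_link("embryo-42")
assert link.startswith("https://example.invalid/view/")
assert make_access_link("embryo-42") == link  # same target, same link
```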
In some embodiments, the identification information of the target object includes, but is not limited to: a key field of the target object; image information of the target object; microscopic recording information of the target object; unique identification code information of the target object; a plurality of microscopic image information of a plurality of sub-regions of the target object. For example, the identification information includes an identifier or the like for determining corresponding microscopic video information, including but not limited to a key field of the target object, such as a name of the target object or a keyword for searching the target object extracted from the name of the target object; the identification information comprises microscopic record information of the target object, such as historical records of microscopic image information or microscopic sub-video information of the target object, which are uploaded or searched by a user in an application; the unique identification code information of the target object, such as a unique identification code set in an application of the target object, and the like; the identification information may include a plurality of microscopic image information of the plurality of sub-regions of the target object, for example, a user directly sends the plurality of microscopic image information of the plurality of sub-regions of the target object to a cloud, the cloud generates a plurality of corresponding microscopic sub-video information, and returns the information to the terminal.
In some embodiments, the microscopic sub-video request includes identification information of at least one sub-region of a plurality of sub-regions of the target object; in step S1012, the terminal receives microscopic sub-video information returned by the cloud and corresponding to at least one sub-region of the multiple sub-regions of the target object, where each sub-region corresponds to at least one microscopic sub-video information, and the microscopic sub-video information includes multiple pieces of microscopic image information based on a time sequence; in step S102, the terminal presents the microscopic sub-video information corresponding to at least one of the sub-regions through the display device, where the microscopic sub-video information corresponding to each of the sub-regions in the at least one sub-region corresponds to the same time node in the time sequence at the same time in the presentation process. For example, when a user studies microscopic data of a target object, only one or more sub-regions are interested, the terminal sends identification information of the at least one sub-region to the cloud based on the operation of the user, the cloud determines corresponding at least one piece of microscopic sub-video information based on the identification information of the at least one sub-region of the target object uploaded by the terminal, or determines microscopic image information of the corresponding at least one sub-region and determines corresponding at least one piece of microscopic sub-video information based on the microscopic image information, then, the cloud returns the at least one piece of microscopic sub-video information to the terminal, and the terminal receives and presents the at least one piece of microscopic sub-video information. 
In some embodiments, the identification information of at least one of the plurality of sub-regions of the target object includes, but is not limited to: image information of at least one subregion of a plurality of subregions of the target object; microscopic recording information of at least one sub-region of a plurality of sub-regions of the target object; a plurality of microscopic image information of at least one sub-region of a plurality of sub-regions of the target object; image mark information on at least one of the plurality of sub-regions in the image information of the target object, and the like. For example, the identification information includes an identifier or the like for determining corresponding at least one piece of microscopic sub-video information, including but not limited to a key field of at least one sub-region of the target object, such as a name of at least one sub-region or a keyword or the like extracted from the name of at least one sub-region of the target object for searching the target object; the identification information may also include microscopic recording information of the at least one sub-region, such as a historical record of microscopic image information or microscopic sub-video information about the at least one sub-region, which is uploaded or searched by a user in an application; the unique identification code information of the at least one sub-area, such as a unique identification code set in an application by the at least one sub-area, and the like; the identification information may include microscopic image information of the at least one sub-region, for example, a user directly sends the microscopic image information of the at least one sub-region of the target object to the cloud, and the cloud generates corresponding at least one microscopic sub-video information and returns the information to the terminal.
Fig. 4 illustrates a method for presenting microscopic sub-video information of a target object, wherein the method is applied to a cloud and includes step S201, step S202, and step S203. In step S201, the cloud receives a microscopic sub-video request about a target object sent by a terminal, where the microscopic sub-video request includes identification information of the target object; in step S202, the cloud determines, according to the identification information of the target object, a plurality of pieces of microscopic sub-video information of the target object, where the target object includes a plurality of sub-regions, each sub-region corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes a plurality of pieces of microscopic image information based on a time sequence; in step S203, the cloud returns the plurality of pieces of microscopic sub-video information of the target object to the terminal. For example, the terminal sends a microscopic sub-video request about a target object to the cloud, where the microscopic sub-video request includes identification information of the target object, and the identification information includes an identifier for determining the corresponding microscopic sub-video information and the like, including but not limited to microscopic image sequence information of the target object, a key field of the target object, image information of the target object, microscopic record information of the target object, and unique identification code information of the target object. 
The cloud stores the correspondence between the identification information of the target object and the microscopic image information or the plurality of pieces of microscopic sub-video information of the target object. Based on the identification information uploaded by the terminal, the cloud either determines the corresponding plurality of pieces of microscopic sub-video information directly, or first determines the corresponding microscopic image information and then determines the plurality of pieces of microscopic sub-video information from it; the cloud then returns the plurality of pieces of microscopic sub-video information to the terminal, which receives and presents them. The identification information sent by the terminal includes identifiers used to determine the corresponding pieces of microscopic sub-video information, including but not limited to: a key field of the target object; image information of the target object; microscopic recording information of the target object; unique identification code information of the target object; a plurality of pieces of microscopic image information of the plurality of sub-regions of the target object; and the like.
In some embodiments, in step S202, the cloud queries a microscopic video database, according to the identification information of the target object, for the plurality of pieces of microscopic sub-video information of the target object; if they do not exist, the cloud acquires, for each of the plurality of sub-regions, a plurality of pieces of microscopic image information based on the time sequence, and generates the microscopic sub-video information corresponding to each sub-region from those images. For example, the cloud maintains a microscopic video database storing the pieces of microscopic sub-video information of known target objects; after receiving the identification information of a target object, the cloud searches the database for the corresponding microscopic sub-video information and, if found, returns it directly; if not found, the cloud further acquires the corresponding microscopic image information and uses it to generate the corresponding microscopic sub-video information.
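The query-or-generate flow of step S202 can be sketched as below. All names here (MicroVideoDB, store_images, lookup_or_generate) are illustrative and not part of the application; a real cloud service would back this with persistent storage rather than in-memory dictionaries.

```python
# Sketch of the cloud-side query-or-generate flow of step S202.
from collections import defaultdict

class MicroVideoDB:
    """Toy microscopic-video database keyed by target-object identifier."""
    def __init__(self):
        self._videos = {}   # identifier -> {sub_region: [frames sorted by time]}
        self._images = {}   # identifier -> list of (sub_region, time_node, frame)

    def store_images(self, identifier, images):
        self._images[identifier] = list(images)

    def lookup_or_generate(self, identifier):
        # Return cached sub-videos if they already exist ...
        if identifier in self._videos:
            return self._videos[identifier]
        # ... otherwise generate one sub-video per sub-region from the
        # time-ordered microscopic image information.
        frames = self._images.get(identifier)
        if frames is None:
            raise KeyError(f"no microscopic data for {identifier!r}")
        per_region = defaultdict(list)
        for sub_region, time_node, frame in frames:
            per_region[sub_region].append((time_node, frame))
        videos = {r: [f for _, f in sorted(seq, key=lambda p: p[0])]
                  for r, seq in per_region.items()}
        self._videos[identifier] = videos   # cache for later requests
        return videos
```

A later request with the same identifier hits the cache and skips regeneration.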
In some embodiments, the microscopic sub-video request further includes microscopy parameter information for the plurality of pieces of microscopic sub-video information; in step S202, the cloud determines, according to the identification information of the target object, the plurality of pieces of microscopic sub-video information that meet the microscopy parameter information, where the target object includes a plurality of sub-regions, each sub-region corresponds to at least one piece of microscopic sub-video information, and each piece of microscopic sub-video information includes a plurality of pieces of microscopic image information based on a time sequence. For example, the microscopic sub-video information further carries the microscopy parameter information under which it was acquired, including but not limited to: focal plane height information; objective lens magnification information; illumination color information; illumination brightness information; fluorescence wavelength information; temperature information; humidity information; pH value information; polarized light angle information; DIC rotation angle information; altitude information; and the like.
The microscopic sub-video request thus includes the corresponding microscopy parameter information, and the cloud returns the matching microscopic sub-video information to the terminal based on it. For instance, suppose a user wants to observe microscopic data of red blood cells and white blood cells in a blood sample; because of the difference in size between them, the red blood cells are observed with a ten-fold objective and the white blood cells with a hundred-fold objective. The terminal sends the identification information of the blood sample to the cloud, where the microscopy parameter information corresponding to the red-blood-cell sub-region specifies the ten-fold objective and that corresponding to the white-blood-cell sub-region specifies the hundred-fold objective; the cloud then returns, according to the request, the ten-fold-objective microscopic sub-video information for the red-blood-cell sub-region and the hundred-fold-objective microscopic sub-video information for the white-blood-cell sub-region.
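One way such per-sub-region parameter matching could be performed is sketched below; the catalog layout and parameter names ("objective") are assumptions made for illustration, not a structure the application specifies.

```python
# Sketch: selecting, per sub-region, the sub-video whose acquisition
# parameters match the requested microscopy parameter information.

def select_sub_videos(catalog, requested):
    """catalog: {sub_region: [{"params": {...}, "video": ...}, ...]}
    requested: {sub_region: {param: value}} from the sub-video request."""
    result = {}
    for sub_region, wanted in requested.items():
        for entry in catalog.get(sub_region, []):
            # A candidate matches if every requested parameter agrees.
            if all(entry["params"].get(k) == v for k, v in wanted.items()):
                result[sub_region] = entry["video"]
                break
    return result

# Hypothetical blood-sample catalog matching the example in the text:
catalog = {
    "red":   [{"params": {"objective": 10},  "video": "red@10x"},
              {"params": {"objective": 100}, "video": "red@100x"}],
    "white": [{"params": {"objective": 100}, "video": "white@100x"}],
}
request = {"red": {"objective": 10}, "white": {"objective": 100}}
```

Here `select_sub_videos(catalog, request)` pairs the ten-fold objective video with the red-cell sub-region and the hundred-fold one with the white cells.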
In some embodiments, the microscopic sub-video request includes identification information of at least one of the plurality of sub-regions of the target object; in step S202, the cloud determines the microscopic sub-video information of the at least one sub-region according to that identification information, where each sub-region corresponds to at least one piece of microscopic sub-video information, and each piece of microscopic sub-video information includes a plurality of pieces of microscopic image information based on a time sequence. For example, when studying microscopic data of a target object, a user may be interested in only one or a few sub-regions; based on the user's operation, the terminal sends the identification information of the at least one sub-region to the cloud. The cloud either determines the corresponding at least one piece of microscopic sub-video information directly from that identification information, or first determines the microscopic image information of the at least one sub-region and then determines the corresponding at least one piece of microscopic sub-video information from it; the cloud then returns the at least one piece of microscopic sub-video information to the terminal, which receives and presents it.
In some embodiments, the identification information of at least one of the plurality of sub-regions of the target object includes, but is not limited to: image information of at least one of the plurality of sub-regions of the target object; microscopic recording information of at least one of the plurality of sub-regions; a plurality of pieces of microscopic image information of at least one of the plurality of sub-regions; image mark information on at least one of the plurality of sub-regions in the image information of the target object; and the like. For example, the identification information includes an identifier used to determine the corresponding at least one piece of microscopic sub-video information, including but not limited to a key field of the at least one sub-region, such as its name or a keyword extracted from that name for searching; the identification information may also include microscopic recording information of the at least one sub-region, such as a history of microscopic image information or microscopic sub-video information about the at least one sub-region that a user has uploaded or searched for in an application; unique identification code information of the at least one sub-region, such as a unique identification code assigned to the at least one sub-region in an application; or microscopic image information of the at least one sub-region, for example, where the user sends the microscopic image information of the at least one sub-region directly to the cloud, and the cloud generates the corresponding at least one piece of microscopic sub-video information and returns it to the terminal.
Fig. 5 illustrates an apparatus for presenting microscopic sub-video information of a target object according to an aspect of the present application, wherein the apparatus includes a first module 101 and a second module 102. The first module 101 is configured to acquire a plurality of pieces of microscopic sub-video information about a target object, where the target object includes a plurality of sub-regions, each sub-region corresponds to at least one piece of microscopic sub-video information, and each piece of microscopic sub-video information includes a plurality of pieces of microscopic image information based on a time sequence. The second module 102 is configured to present, through a display device, the microscopic sub-video information corresponding to the plurality of sub-regions, where at any given moment during presentation the microscopic sub-video information of every sub-region corresponds to the same time node in the time sequence. Here, the apparatus is generally a computing device, which includes, but is not limited to, a user device, a network device, or a device formed by integrating a user device and a network device through a network; the user device includes, but is not limited to, any terminal capable of human-computer interaction with a user (for example, through a touch pad), and the network device includes, but is not limited to, a computer, a network host, a single network server, a set of network servers, or a cloud formed by a plurality of servers.
Specifically, the first module 101 is configured to acquire a plurality of pieces of microscopic sub-video information about a target object, where the target object includes a plurality of sub-regions, each sub-region corresponds to at least one piece of microscopic sub-video information, and each piece of microscopic sub-video information includes a plurality of pieces of microscopic image information based on a time sequence. For example, the microscopic image information includes high-magnification image information of the target object, or of a partial region of it, acquired under an optical or electron microscope; the microscopic sub-video information includes high-magnification video information of a partial region of the target object under an optical or electron microscope, and it can be acquired directly by a corresponding camera device or generated from microscopic image information acquired by that device. The observation area of the target object is the combination of a plurality of sub-regions under the current microscope lens, where each sub-region corresponds to the part of the target visible under one microscope field of view, and the overall field covered by the plurality of sub-regions can contain the whole of the target object; alternatively, based on a user operation (such as a frame-selection operation), the computing device can divide out the plurality of sub-regions the user selects from the whole extent of the target object.
Here, the computing device includes an imaging means through which it acquires the microscopic sub-video information about the target object, or acquires microscopic image information about the plurality of sub-regions and generates the corresponding microscopic sub-video information from it; alternatively, the computing device includes a communication means for establishing a communication connection with another device and receiving, over that connection, the plurality of pieces of microscopic sub-video information about the target object sent by the other device.
In some embodiments, the microscopic sub-video information is generated from a plurality of pieces of microscopic image information of the corresponding sub-region based on a time sequence. For example, the microscopic image information includes image information of each sub-region captured under microscopic conditions by the imaging device. Each piece of image information carries the coordinate position information at which it was captured, from which the sub-region it belongs to can be determined, and each piece further carries a time node corresponding to the moment of capture; a time node may be the capture time itself, or a time interval of a certain length centered on that time, such as [T-T0, T+T0]. The time sequences formed by the time nodes of the microscopic image information of the individual sub-regions can be identical, or a common time sequence can be used; based on this, the computing device can determine the plurality of pieces of microscopic sub-video information, for example by ordering the microscopic image information of each sub-region according to the time sequence and setting certain video parameters to generate the corresponding microscopic sub-video information. In some embodiments, the apparatus further includes a third module 103 (not shown), by which the computing device arranges the plurality of pieces of microscopic image information according to a preset time sequence to generate the microscopic sub-video information of the corresponding sub-region.
For example, the computing device may sort the microscopic image information of a sub-region by the time node of each piece according to a preset time sequence, and set certain video parameters (such as playing 30 frames per second) to generate the corresponding microscopic sub-video information. The preset time sequence may be set by the user or chosen from time sequences provided by the system; it includes, but is not limited to, ordering chronologically, sampling at a fixed time-sequence interval, or selecting a certain time interval (an interval whose start is one time node and whose end is another, with playback running from the start time to the end time).
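The third-module step above, ordering a sub-region's frames by time node and packaging them with a playback parameter, can be sketched as follows; the function and field names are illustrative assumptions.

```python
# Sketch: ordering a sub-region's microscopic image information by time
# node and packaging it with a frame rate and an optional time interval.

def make_sub_video(frames, fps=30, interval=None):
    """frames: iterable of (time_node, image); interval: optional (t0, t1).
    Returns the frames sorted by time node, optionally clipped to the
    chosen time interval, plus the playback parameter."""
    ordered = sorted(frames, key=lambda f: f[0])
    if interval is not None:
        t0, t1 = interval
        # Keep only the frames whose time node falls in [t0, t1].
        ordered = [f for f in ordered if t0 <= f[0] <= t1]
    return {"fps": fps, "frames": [img for _, img in ordered]}
```

Passing `interval=(t0, t1)` corresponds to the "select a certain time interval" option; omitting it yields plain chronological ordering.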
The second module 102 is configured to present, through a display device, the microscopic sub-video information corresponding to the plurality of sub-regions, where at any given moment during presentation the microscopic sub-video information of every sub-region corresponds to the same time node in the time sequence. For example, the display device used to present the microscopic sub-video information may be a display screen, a projector, or the like. The computing device presents the microscopic sub-video information of the plurality of sub-regions simultaneously through the display device. Suppose the target object is divided equally into four sub-regions 1-1, 1-2, 2-1, and 2-2, each with one minute of microscopic sub-video information: the computing device divides the current display into four equal rectangles at the upper-left, upper-right, lower-left, and lower-right corners, and presents the microscopic sub-video information of sub-region 1-1 at the upper left, of 1-2 at the upper right, of 2-1 at the lower left, and of 2-2 at the lower right. During presentation, the videos of the four regions share the same time node at any moment; for example, when the upper-left video has played to 30.0 s, the other three regions are also playing the frame corresponding to the 30.0 s time node.
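The same-time-node constraint can be kept by driving all sub-videos from one shared clock, as in this sketch (class and method names are illustrative):

```python
# Sketch: keeping all sub-videos on the same time node. A single shared
# clock is mapped to a frame index, so when the upper-left video shows
# the frame at 30.0 s, the other three quadrants do too.

class SyncPlayer:
    def __init__(self, sub_videos, fps=30):
        self.sub_videos = sub_videos   # {sub_region: [frame, frame, ...]}
        self.fps = fps

    def frames_at(self, t_seconds):
        """Return, for every sub-region, the frame for the shared time node;
        a video that has run out of frames holds its last frame."""
        index = int(t_seconds * self.fps)
        return {region: frames[min(index, len(frames) - 1)]
                for region, frames in self.sub_videos.items()}
```

Because every quadrant queries the same clock, pausing or seeking the clock moves all four sub-videos together.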
In some embodiments, the microscopic sub-video information includes, but is not limited to, two-dimensional microscopic sub-video information and three-dimensional microscopic sub-video information. For example, the two-dimensional microscopic sub-video information of a sub-region concerns a certain plane of the target object at one focal plane height, and is determined from the microscopic image information of that sub-region acquired at that focal plane height. The three-dimensional microscopic sub-video information includes a plurality of pieces of three-dimensional microscopic image information about a three-dimensional model of a sub-region, arranged in a certain order (for example, in the order of the time corresponding to each piece); it includes three-dimensional microscopic image information of the sub-region at a plurality of times, with at least one piece per time, where each piece includes image information with the three-dimensional space coordinates of the sub-region acquired by a microscopic imaging device. The computing device is installed with a corresponding application or plug-in through which each piece of three-dimensional microscopic image information, and likewise the three-dimensional microscopic sub-video information, can be fully presented.
In some embodiments, the apparatus further includes a fourth module 104 (not shown), configured to generate corresponding control instructions based on the user's control operations on the plurality of pieces of microscopic sub-video information; the second module 102 is then configured to present the plurality of pieces of microscopic sub-video information through the display device according to those control instructions, with the microscopic sub-video information of every sub-region still corresponding to the same time node at any moment during presentation. The control operations include, but are not limited to, the user's adjustment of the playing mode, playing speed, presentation view angle, presentation position, presentation window size, or other parameters of the pieces of microscopic sub-video information, and the corresponding control instructions include, but are not limited to, instructions adjusting those same parameters. The computing device further includes an input means, such as a touch pad, keyboard, mouse, or touch screen, for acquiring the user's input; through it the computing device can capture a control operation such as a touch, a click, or a scroll of the wheel, and generate the corresponding control instruction.
Here, in some embodiments, the control instructions include, but are not limited to: presenting at least one of the plurality of pieces of microscopic sub-video information; enlarging or reducing at least one of them; pausing at least one of them; selecting a first time interval in the corresponding time sequence for presentation; and adjusting the presentation position of at least one of them. For example, the control operations include, but are not limited to, user operations on an input device such as a mouse, keyboard, touch screen, or microphone, based on which the computing device generates the corresponding control instructions.
For example, the control instruction includes presenting at least one of the plurality of pieces of microscopic sub-video information. Suppose the target object is divided equally into four sub-regions 1-1, 1-2, 2-1, and 2-2, each with one minute of microscopic sub-video information presented in four equal rectangles at the upper-left, upper-right, lower-left, and lower-right corners of the display, respectively. If the user is interested only in sub-regions 1-1 and 2-2, then based on the user's operation, such as a click-to-select on the touch screen, the computing device generates the corresponding control instruction, closes the microscopic sub-video information of sub-regions 1-2 and 2-1, and presents only that of 1-1 and 2-2. In some embodiments, the computing device then redistributes the presentation regions in the display among the sub-regions still being presented, for example dividing the screen into two rectangles of the same size, or into rectangles shaped as the user requires, and presenting the microscopic sub-video information of sub-regions 1-1 and 2-2 in those two rectangular regions.
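Redistributing the display among the sub-regions still being presented might look like the following sketch; the equal-width-column tiling is only one assumed layout policy, and the function name is illustrative.

```python
# Sketch: re-dividing the display area among only the sub-regions the
# user keeps open, here as equal-width columns. Each value is a
# rectangle (x, y, width, height) in pixels.

def layout(visible, screen_w, screen_h):
    """visible: ordered list of sub-region names still being presented."""
    if not visible:
        return {}
    w = screen_w // len(visible)
    return {name: (i * w, 0, w, screen_h) for i, name in enumerate(visible)}
```

After closing 1-2 and 2-1, `layout(["1-1", "2-2"], 800, 600)` splits the screen into two equal rectangles for the remaining sub-videos.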
For example, the control instruction includes enlarging or reducing at least one of the plurality of pieces of microscopic sub-video information. With the same four sub-regions presented in four equal rectangles as above, the user may perform a two-finger spread operation on the touch screen over the region showing the microscopic sub-video information of sub-region 1-1; the computing device then generates the corresponding control instruction to enlarge that microscopic sub-video information, magnifying it about the midpoint between the user's two fingers so as to present the details of sub-region 1-1 that interest the user.
Conversely, if the user performs a two-finger pinch operation on the touch screen over the microscopic sub-video information of sub-region 2-2, the computing device generates a control instruction to reduce it, shrinking the video about the midpoint between the two fingers so as to present more of the overall picture of sub-region 2-2 for the user to observe. In some embodiments, enlarging or reducing a piece of microscopic sub-video information may also enlarge or reduce its presentation window: if the user needs to examine the microscopic sub-video information of sub-region 2-1 closely, the presentation window of that sub-region is enlarged while the presentation windows of the other three sub-regions are reduced.
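The "magnify about the midpoint between the two fingers" behaviour reduces to simple 2-D arithmetic, sketched below with no UI toolkit assumed; the function name is illustrative.

```python
# Sketch of the zoom regulation: scaling the viewport of one sub-video
# about the pinch midpoint, so the pinched point stays fixed on screen.

def zoom_viewport(viewport, center, factor):
    """viewport: (x, y, w, h) of the visible region of the sub-video;
    center: (cx, cy) pinch midpoint in the same coordinates;
    factor > 1 zooms in (smaller visible region), < 1 zooms out."""
    x, y, w, h = viewport
    cx, cy = center
    nw, nh = w / factor, h / factor
    # Keep the pinch midpoint at the same relative position.
    nx = cx - (cx - x) / factor
    ny = cy - (cy - y) / factor
    return (nx, ny, nw, nh)
```

A spread gesture maps to `factor > 1` (enlarge, as for sub-region 1-1) and a pinch to `factor < 1` (reduce, as for sub-region 2-2).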
For example, the control instruction includes pausing at least one of the plurality of pieces of microscopic sub-video information. With the four sub-regions 1-1, 1-2, 2-1, and 2-2 presented in four equal rectangles as above, suppose the user is examining an image frame in the microscopic sub-video information of sub-region 1-1. Based on the user's operation, such as clicking that sub-video on the touch screen or touching its pause key, the computing device generates the corresponding control instruction and pauses the microscopic sub-video information of sub-region 1-1 on the currently played video frame. In some embodiments the pause instruction is effective for all the microscopic sub-video information, so that pausing sub-region 1-1 simultaneously pauses the other three sub-regions; in other embodiments it is effective only for sub-region 1-1, so that while 1-1 is paused the microscopic sub-video information of the other sub-regions keeps playing, which lets the user continue observing the microscopic data of the other regions.
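The two pause behaviours, effective for all sub-videos versus only the selected one, can be sketched as a small controller; the class name and flag are illustrative.

```python
# Sketch of the two pause semantics: a global pause freezes every
# sub-video, a local pause freezes only the selected sub-region.

class PauseController:
    def __init__(self, regions, global_pause=False):
        self.global_pause = global_pause
        self.paused = {r: False for r in regions}

    def pause(self, region):
        if self.global_pause:
            # Pausing any sub-region pauses all of them.
            for r in self.paused:
                self.paused[r] = True
        else:
            self.paused[region] = True

    def playing(self):
        return [r for r, p in self.paused.items() if not p]
```

Which embodiment applies is just the `global_pause` flag; the rest of the playback loop is unchanged.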
For example, the control instruction includes selecting a first time interval in the time sequence for presenting the plurality of pieces of microscopic sub-video information. With the four sub-regions presented in four equal rectangles as above, suppose the user wants to watch only the period of interest within the microscopic sub-video information of sub-region 1-1. Based on the user's operation, such as setting the playing period of that sub-video (say from 15 s to 45 s), the computing device generates the corresponding control instruction and plays the microscopic sub-video information of sub-region 1-1 from 15 s to 45 s. In some embodiments this playing setting is synchronized to all sub-regions, so the playing periods of the other three sub-regions are likewise adjusted to 15 s to 45 s.
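Propagating a chosen playing period either to one sub-video or, when synchronized, to all of them is a one-function sketch; the settings structure is an assumption.

```python
# Sketch: applying a chosen playback interval (e.g. 15 s to 45 s) to one
# sub-video or, when the setting is synchronized, to all of them.

def set_interval(settings, region, interval, synchronize=False):
    """settings: {sub_region: (start_s, end_s)}; returns a new dict."""
    updated = dict(settings)
    targets = list(settings) if synchronize else [region]
    for r in targets:
        updated[r] = interval
    return updated
```

With `synchronize=True`, setting 15 s to 45 s on sub-region 1-1 adjusts the playing periods of the other sub-regions as well.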
For example, the adjustment instruction includes adjusting a presentation position of at least one of the plurality of pieces of microscopic sub-video information. Suppose, as above, the target object is equally divided into four sub-areas 1-1, 1-2, 2-1 and 2-2, each with a corresponding piece of microscopic sub-video information one minute in duration, and the computing device presents them in four identical rectangles at the upper-left, upper-right, lower-left and lower-right corners of the current display device, respectively. While observing the microscopic sub-video information of sub-areas 1-1 and 2-2, the user notices associated microscopic data and wishes to play the two pieces of microscopic sub-video information side by side so that the associated data can be conveniently compared and analyzed. Based on a user operation, such as dragging the window of the microscopic sub-video information of sub-area 2-2, the computing device generates a corresponding adjustment instruction for adjusting the presentation position of the microscopic sub-video information of sub-area 2-2 and moves it to the final position of the drag operation, the displacement vector of the presentation position pointing from the start point to the end point of the user's finger touch on the touch screen. In some embodiments, based on this displacement vector, the computing device calculates an adapted presentation region for the microscopic sub-video information of sub-area 2-2, for example moving it to the presentation position originally occupied by sub-area 1-2 and moving the microscopic sub-video information of sub-area 1-2 to the presentation position previously occupied by sub-area 2-2.
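The drag-to-reposition behaviour, including the adaptive swap of windows, might be modelled roughly as below; the quadrant layout, the screen size and all identifiers are hypothetical, not taken from the disclosure:

```python
# Illustrative sketch: given the displacement vector from touch start to
# touch end, find which quadrant the dragged window lands in and swap it
# with the window already presented there.

QUADRANTS = {"1-1": (0, 0), "1-2": (1, 0), "2-1": (0, 1), "2-2": (1, 1)}

def quadrant_at(x, y, width, height):
    """Map a screen point to the quadrant (col, row) that contains it."""
    return (int(x >= width / 2), int(y >= height / 2))

def drag_window(layout, dragged, start, end, width=800, height=600):
    """Swap the dragged sub-area's window with the one at the drop point.

    The displacement vector points from the touch start point to the
    touch end point, so the drop point is start + vector."""
    vector = (end[0] - start[0], end[1] - start[1])
    drop = (start[0] + vector[0], start[1] + vector[1])
    target_cell = quadrant_at(drop[0], drop[1], width, height)
    target = next(a for a, cell in layout.items() if cell == target_cell)
    layout = dict(layout)
    layout[dragged], layout[target] = layout[target], layout[dragged]
    return layout

# Drag sub-area 2-2 (bottom-right) up to the top-right quadrant, where
# sub-area 1-2 currently sits: the two windows trade places.
new_layout = drag_window(QUADRANTS, "2-2", start=(700, 500), end=(700, 100))
assert new_layout["2-2"] == (1, 0) and new_layout["1-2"] == (1, 1)
```

Dropping a window back onto its own quadrant leaves the layout unchanged, since the swap is then a no-op.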
In some embodiments, the microscopic sub-video information comprises three-dimensional microscopic sub-video information, and the adjustment instruction includes, without limitation: adjusting a presentation perspective of at least one of the plurality of pieces of microscopic sub-video information; and scrolling the presentation of at least one of the plurality of pieces of microscopic sub-video information. For example, when the microscopic sub-video information includes three-dimensional microscopic sub-video information about the target object, the three-dimensional spatial coordinates and the like of each sub-area can be presented in a specific application for each piece of microscopic sub-video information, and the presentation perspective from which a sub-area is viewed can be switched by rotating its angle of view.
For example, the adjustment instruction includes adjusting a presentation perspective of at least one of the plurality of pieces of microscopic sub-video information. When a user observes three-dimensional microscopic sub-video information, composed of a plurality of sub-regions, of a certain embryo development process, the computing device generates a corresponding adjustment instruction based on the user's mouse movement, touch-and-drag operation, or input of a corresponding command on the three-dimensional microscopic sub-video information of one or more of the sub-regions. The adjustment instruction includes switching the current presentation perspective of the three-dimensional microscopic sub-video information, such as switching it from a front view to a side view, so that the user can observe the development process of the embryo in that sub-region from a side perspective.
For example, the adjustment instruction includes scrolling the presentation of at least one of the plurality of pieces of microscopic sub-video information. For example, while presenting a certain embryo development process, a final viewing angle (e.g., a top view) is selected, and the presentation perspective of the three-dimensional microscopic sub-video of the sub-area is rotated from the current viewing angle (e.g., a front view) at a certain angular speed within a certain playing time, thereby achieving the effect of scrolling the presentation of the three-dimensional microscopic video of the embryo development in the corresponding part of that sub-region.
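A minimal sketch of the scrolling presentation — rotating from the current viewing angle to the selected final angle at a constant angular speed over the playing time — could look like this; the frame rate, the angle convention and the function name are assumptions:

```python
# Hypothetical sketch: per-frame presentation angles for a constant-speed
# rotation from the current view (front, 0 degrees) to a selected final
# view (top, 90 degrees).

def rotation_keyframes(start_deg, end_deg, play_seconds, fps=30):
    """Return per-frame presentation angles for a constant-speed rotation."""
    frames = int(play_seconds * fps)
    speed = (end_deg - start_deg) / frames  # degrees per frame
    return [start_deg + speed * i for i in range(frames + 1)]

# Rotate from a front view (0 degrees) to a top view (90 degrees) over 3 s.
angles = rotation_keyframes(0.0, 90.0, 3.0)
assert angles[0] == 0.0 and angles[-1] == 90.0
assert len(angles) == 91  # 90 frames plus the starting pose
```

Each returned angle would be applied to the three-dimensional sub-video's presentation perspective on the corresponding frame, producing the rolling effect described above.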
In some embodiments, each of the sub-regions has a plurality of corresponding pieces of microscopic sub-video information corresponding to at least one item of different microscopic parameter information, and the adjustment instruction comprises switching the microscopic sub-video information corresponding to at least one of the plurality of sub-regions based on the corresponding microscopic parameter information. For example, the microscopic parameter information includes the acquisition configuration information used when acquiring each sub-region of the target object, including but not limited to: focal plane height information; objective lens magnification information; illumination light color information; illumination light brightness information; fluorescence wavelength information; temperature information; humidity information; pH value information; polarized light angle information; DIC rotation angle information; altitude information; and the like. The computing device acquires, according to the microscopic parameter information of at least one piece of microscopic sub-video information among the plurality of pieces, the microscopic sub-video information corresponding to that microscopic parameter information.
For example, suppose a user wants to observe microscopic data of both red blood cells and white blood cells in a blood sample; owing to the differences between the two cell types, the red blood cells are best observed with a ten-fold objective lens and the white blood cells with a hundred-fold objective lens. The computing device accordingly obtains microscopic sub-video information under the ten-fold objective for the sub-region containing the red blood cells, and microscopic sub-video information under the hundred-fold objective for the sub-region containing the white blood cells. The microscopic image information from which the computing device synthesizes the microscopic sub-video information carries the data recorded under the different microscopic parameters (such as the ten-fold or hundred-fold objective), so that the computing device can obtain microscopic sub-video information or microscopic image information about the blood sample under the ten-fold objective as well as under the hundred-fold objective, thereby achieving the effect of obtaining the corresponding microscopic sub-video information for different microscopic parameters. In some embodiments, the microscopic parameter information includes, but is not limited to: focal plane height information; objective lens magnification information; illumination light color information; illumination light brightness information; fluorescence wavelength information; temperature information; humidity information; pH value information; polarized light angle information; DIC rotation angle information; altitude information; and the like.
For example, the focal plane height information includes the height of the focal plane of the objective lens in the spatial coordinate system corresponding to the target object on the stage. The objective lens magnification information refers to the ratio of the size of the image seen by the eye to the size of the corresponding target object, specifically the ratio of lengths rather than of areas. The illumination light color information includes the color of the illumination light used to assist shooting when the microscopic sub-video information or microscopic image information is captured, and the illumination light brightness information includes the brightness of that illumination light. Fluorescence refers to the radiation re-emitted by a substance after it absorbs electromagnetic radiation, the wavelength re-emitted by the excited atoms or molecules during de-excitation being the same as or different from that of the exciting radiation; the fluorescence wavelength information includes the wavelength of the fluorescence used to assist shooting when the microscopic sub-video information or microscopic image information is captured. The temperature information, the humidity information and the pH value information respectively include the temperature, humidity and pH value of the solution in which the target object is located on the stage when the microscopic sub-video information or microscopic image information of the target object is acquired.
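One way to represent the microscopic parameter information above and to select the sub-video matching a requested parameter set is sketched below; the field names paraphrase the disclosure, and the sample values and file names are invented for illustration:

```python
# Hypothetical sketch: acquisition parameters as a record, plus a lookup
# that returns the sub-video of a sub-area matching the requested values.
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class MicroscopyParams:
    focal_plane_height_um: float = 0.0
    objective_magnification: int = 10
    light_color: str = "white"
    light_brightness: float = 1.0
    ph: float = 7.4  # pH of the solution on the stage

def find_sub_video(videos, sub_area, **wanted):
    """Return the first sub-video of `sub_area` whose parameters match
    every requested key/value pair, or None if there is no match."""
    for area, params, clip in videos:
        if area == sub_area and all(
            asdict(params).get(k) == v for k, v in wanted.items()
        ):
            return clip
    return None

videos = [
    ("red-cells", MicroscopyParams(objective_magnification=10), "rbc_10x.mp4"),
    ("white-cells", MicroscopyParams(objective_magnification=100), "wbc_100x.mp4"),
]
assert find_sub_video(videos, "white-cells", objective_magnification=100) == "wbc_100x.mp4"
assert find_sub_video(videos, "red-cells", objective_magnification=100) is None
```

Switching the microscopic sub-video of a sub-area to different parameter information then amounts to calling the lookup with the new parameter values.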
Fig. 2 illustrates a method for presenting microscopic sub-video information of a target object, the method being applied to a computing device that includes a terminal and a cloud, the method including:
the terminal sends a microscopic sub-video request about a target object to the cloud, wherein the microscopic sub-video request includes identification information of the target object;
the cloud receives the microscopic sub-video request and determines a plurality of pieces of microscopic sub-video information of the target object according to the identification information of the target object, wherein the target object includes a plurality of sub-areas, each sub-area corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes a plurality of pieces of microscopic image information based on a time sequence;
the cloud returns the plurality of pieces of microscopic sub-video information of the target object to the terminal;
and the terminal receives the microscopic sub-video information corresponding to the plurality of sub-areas and presents it through a display device, wherein, at any given moment during presentation, the microscopic sub-video information corresponding to each sub-area corresponds to the same time node in the time sequence.
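The four steps above can be sketched as a simple request/response exchange, with the cloud modelled as an in-memory lookup; all identifiers, payload shapes and file names are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical sketch of the terminal/cloud exchange from Fig. 2.

class Cloud:
    def __init__(self, store):
        # Maps target-object identification info to its sub-videos,
        # one entry per sub-area.
        self.store = store

    def handle_request(self, request):
        """Step 2: resolve the identification info to the sub-videos."""
        target_id = request["target_id"]
        return self.store.get(target_id, {})

class Terminal:
    def __init__(self, cloud):
        self.cloud = cloud

    def fetch_sub_videos(self, target_id):
        """Step 1: send the request; step 4: receive the sub-videos."""
        request = {"target_id": target_id}
        return self.cloud.handle_request(request)  # step 3: cloud returns

cloud = Cloud({"embryo-42": {"1-1": "e42_11.mp4", "1-2": "e42_12.mp4"}})
terminal = Terminal(cloud)
videos = terminal.fetch_sub_videos("embryo-42")
assert videos == {"1-1": "e42_11.mp4", "1-2": "e42_12.mp4"}
```

In a real deployment the call would cross the network (e.g. over HTTP), and the terminal would then drive all per-sub-area players from one shared playback clock to keep every sub-area on the same time node.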
In some embodiments, the computing device includes a terminal and a cloud, and when the plurality of pieces of microscopic sub-video information are presented at the terminal, the cloud is configured to obtain the plurality of pieces of microscopic sub-video information and send them to the terminal. A one-to-one module 101 included in the terminal includes a one-to-one unit 1011 (not shown) and a one-to-two unit 1012 (not shown). The one-to-one unit 1011 is configured to send a microscopic sub-video request about a target object to the cloud, wherein the microscopic sub-video request includes identification information of the target object. The one-to-two unit 1012 is configured to receive a plurality of pieces of microscopic sub-video information about the target object returned by the cloud, wherein the target object includes a plurality of sub-areas, each sub-area corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes a plurality of pieces of microscopic image information based on a time sequence. For example, the terminal sends a microscopic sub-video request about a target object to the cloud, the request including identification information of the target object, wherein the identification information includes an identifier for determining the corresponding microscopic sub-video information, including but not limited to microscopic image sequence information of the target object, a key field of the target object, image information of the target object, microscopic record information of the target object, and unique identification code information of the target object.
The cloud stores the correspondence between the identification information of the target object and the microscopic image information or the plurality of pieces of microscopic sub-video information of the target object. Based on the identification information of the target object uploaded by the terminal, the cloud either determines the corresponding plurality of pieces of microscopic sub-video information directly, or determines the corresponding microscopic image information and from it determines the corresponding plurality of pieces of microscopic sub-video information; it then returns the plurality of pieces of microscopic sub-video information to the terminal, which receives and presents them.
In some embodiments, the microscopic sub-video request further includes microscopic parameter information of the plurality of pieces of microscopic sub-video information, and the one-to-two unit 1012 is configured to receive the plurality of pieces of microscopic sub-video information about the target object, conforming to the microscopic parameter information, returned by the cloud, wherein the target object includes a plurality of sub-areas, each sub-area corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes a plurality of pieces of microscopic image information based on a time sequence. For example, the terminal sends identification information about a target object to the cloud, wherein the identification information includes an identifier for determining the corresponding plurality of pieces of microscopic sub-video information, including but not limited to: a key field of the target object; image information of the target object; microscopic record information of the target object; unique identification code information of the target object; a plurality of pieces of microscopic image information of a plurality of sub-regions of the target object; and the like. The cloud stores the correspondence between the identification information of the target object and the microscopic sub-video information or microscopic image information of the target object, determines the corresponding plurality of pieces of microscopic sub-video information or the corresponding microscopic image information based on the identification information uploaded by the terminal, and then returns the microscopic sub-video information to the terminal, which receives and presents the plurality of pieces of microscopic sub-video information.
In some embodiments, the one-to-two unit 1012 is configured to receive access link information, returned by the cloud, of a plurality of pieces of microscopic sub-video information about the target object, wherein the target object includes a plurality of sub-areas, each sub-area corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes a plurality of pieces of microscopic image information based on a time sequence; a one-to-two module 102 is configured to access the corresponding web page according to the access link information and present the microscopic sub-video information corresponding to the plurality of sub-areas through a display device, wherein, at any given moment during presentation, the microscopic sub-video information corresponding to each sub-area corresponds to the same time node in the time sequence. For example, after determining the plurality of pieces of microscopic sub-video information corresponding to the target object, the cloud does not directly return them to the terminal but generates a web page corresponding to the microscopic sub-video information and returns the access link information of that web page to the terminal; the terminal receives the access link information, enters the corresponding web page through it, requests the corresponding plurality of pieces of microscopic sub-video information from the cloud, and presents them in the web page.
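The access-link variant might be sketched as below, where the cloud derives a stable page link for a target object instead of returning the clips directly; the URL scheme, domain and token length are invented purely for illustration:

```python
# Hypothetical sketch: deriving a deterministic access link for the web
# page that will present the target object's sub-videos.
import hashlib

def make_access_link(target_id, base="https://example.invalid/view"):
    """Derive a stable page token from the target object's identifier."""
    token = hashlib.sha256(target_id.encode()).hexdigest()[:12]
    return f"{base}/{token}"

link = make_access_link("embryo-42")
assert link.startswith("https://example.invalid/view/")
assert link == make_access_link("embryo-42")  # deterministic per object
```

The terminal would open this link, and the page's own scripts would then request the plurality of pieces of microscopic sub-video information from the cloud.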
In some embodiments, the identification information of the target object includes, but is not limited to: a key field of the target object; image information of the target object; microscopic record information of the target object; unique identification code information of the target object; a plurality of pieces of microscopic image information of a plurality of sub-regions of the target object. For example, the identification information includes an identifier for determining the corresponding microscopic sub-video information, including but not limited to a key field of the target object, such as the name of the target object or a keyword extracted from that name for searching for the target object; the identification information may include microscopic record information of the target object, such as a history of the microscopic image information or microscopic sub-video information of the target object uploaded or searched by the user in an application; it may include unique identification code information of the target object, such as a unique identification code set for the target object in an application; and it may include a plurality of pieces of microscopic image information of the plurality of sub-regions of the target object, in which case, for example, the user directly sends the plurality of pieces of microscopic image information of the plurality of sub-regions of the target object to the cloud, and the cloud generates the corresponding plurality of pieces of microscopic sub-video information and returns them to the terminal.
In some embodiments, the microscopic sub-video request includes identification information of at least one sub-region of a plurality of sub-regions of the target object; the one-to-two unit 1012 is configured to receive the microscopic sub-video information, returned by the cloud, corresponding to at least one sub-region of the plurality of sub-regions of the target object, wherein each sub-region corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes a plurality of pieces of microscopic image information based on a time sequence; the one-to-two module 102 is configured to present, through a display device, the microscopic sub-video information corresponding to the at least one sub-region, wherein, at any given moment during presentation, the microscopic sub-video information corresponding to each of the at least one sub-region corresponds to the same time node in the time sequence. For example, when studying the microscopic data of a target object, a user may be interested in only one or more of the sub-regions. Based on the user's operation, the terminal sends the identification information of the at least one sub-region to the cloud; the cloud, based on that identification information, either determines the corresponding at least one piece of microscopic sub-video information directly, or determines the microscopic image information of the corresponding at least one sub-region and from it determines the corresponding at least one piece of microscopic sub-video information; the cloud then returns the at least one piece of microscopic sub-video information to the terminal, which receives and presents it.
In some embodiments, the identification information of at least one of the plurality of sub-regions of the target object includes, but is not limited to: image information of at least one sub-region of the plurality of sub-regions of the target object; microscopic record information of at least one sub-region; a plurality of pieces of microscopic image information of at least one sub-region; image mark information on at least one of the plurality of sub-regions in the image information of the target object; and the like. For example, the identification information includes an identifier for determining the corresponding at least one piece of microscopic sub-video information, including but not limited to a key field of at least one sub-region of the target object, such as the name of the sub-region or a keyword extracted from that name for searching; the identification information may include microscopic record information of the at least one sub-region, such as a history of the microscopic image information or microscopic sub-video information about that sub-region uploaded or searched by the user in an application; it may include unique identification code information of the at least one sub-region, such as a unique identification code set for the sub-region in an application; and it may include microscopic image information of the at least one sub-region, in which case, for example, the user directly sends the microscopic image information of the at least one sub-region of the target object to the cloud, and the cloud generates the corresponding at least one piece of microscopic sub-video information and returns it to the terminal.
Fig. 6 illustrates an apparatus for presenting microscopic sub-video information of a target object according to an aspect of the present application, wherein the apparatus is typically a cloud and comprises a two-in-one module 201, a two-in-two module 202 and a two-in-three module 203. The two-in-one module 201 is configured to receive a microscopic sub-video request about a target object sent by a terminal, wherein the microscopic sub-video request includes identification information of the target object; the two-in-two module 202 is configured to determine, according to the identification information of the target object, a plurality of pieces of microscopic sub-video information of the target object, wherein the target object includes a plurality of sub-regions, each sub-region corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes a plurality of pieces of microscopic image information based on a time sequence; the two-in-three module 203 is configured to return the plurality of pieces of microscopic sub-video information of the target object to the terminal. For example, the terminal sends a microscopic sub-video request about a target object to the cloud, the request including identification information of the target object, wherein the identification information includes an identifier for determining the corresponding microscopic sub-video information, including but not limited to microscopic image sequence information of the target object, a key field of the target object, image information of the target object, microscopic record information of the target object, and unique identification code information of the target object.
The cloud stores the correspondence between the identification information of the target object and the microscopic image information or the plurality of pieces of microscopic sub-video information of the target object. Based on the identification information of the target object uploaded by the terminal, the cloud either determines the corresponding plurality of pieces of microscopic sub-video information directly, or determines the corresponding microscopic image information and from it determines the corresponding plurality of pieces of microscopic sub-video information; it then returns the plurality of pieces of microscopic sub-video information to the terminal, which receives and presents them. The identification information sent by the terminal includes an identifier for determining the corresponding plurality of pieces of microscopic sub-video information, including but not limited to: a key field of the target object; image information of the target object; microscopic record information of the target object; unique identification code information of the target object; a plurality of pieces of microscopic image information of a plurality of sub-regions of the target object; and the like.
In some embodiments, the two-in-two module 202 is configured to query, in a microscopic video database, whether the plurality of pieces of microscopic sub-video information of the target object exist according to the identification information of the target object; if they do not exist, it acquires a plurality of pieces of microscopic image information, based on the time sequence, of each of the plurality of sub-regions of the target object, and generates the microscopic sub-video information corresponding to each sub-region from that time-ordered microscopic image information. For example, the cloud maintains a microscopic video database storing the pieces of microscopic sub-video information of corresponding target objects. After receiving the identification information of a target object, the cloud searches the database for the corresponding microscopic sub-video information; if it exists, the cloud directly returns it, and if it does not exist, the cloud further acquires the corresponding microscopic image information and uses it to generate the corresponding microscopic sub-video information.
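The query-or-generate logic can be sketched as follows, with video synthesis reduced to a stand-in function; all names, the store layout and the frame file names are illustrative assumptions:

```python
# Hypothetical sketch: look the target object up in the microscopic-video
# database; on a miss, build each sub-area's sub-video from its
# time-ordered microscopic images and store the result.

def generate_sub_video(frames):
    """Stand-in for video synthesis from a time-ordered image sequence."""
    return {"frame_count": len(frames), "frames": list(frames)}

def get_sub_videos(db, image_store, target_id):
    if target_id in db:           # hit: return the stored sub-videos
        return db[target_id]
    # Miss: fetch the per-sub-area image sequences and synthesize.
    videos = {
        area: generate_sub_video(frames)
        for area, frames in image_store[target_id].items()
    }
    db[target_id] = videos        # store for subsequent requests
    return videos

db = {}
images = {"sample-7": {"1-1": ["t0.png", "t1.png"], "1-2": ["t0.png"]}}
first = get_sub_videos(db, images, "sample-7")
assert first["1-1"]["frame_count"] == 2
assert get_sub_videos(db, images, "sample-7") is first  # now served from the db
```

Storing the generated sub-videos back into the database means the synthesis cost is paid only on the first request for a given target object.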
In some embodiments, the microscopic sub-video request further includes microscopic parameter information of the plurality of pieces of microscopic sub-video information, and the two-in-two module 202 is configured to determine, according to the identification information of the target object, a plurality of pieces of microscopic sub-video information of the target object that conform to the microscopic parameter information, wherein the target object includes a plurality of sub-regions, each sub-region corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes a plurality of pieces of microscopic image information based on a time sequence. For example, the microscopic sub-video information further includes the microscopic parameter information with which it was acquired, including but not limited to: focal plane height information; objective lens magnification information; illumination light color information; illumination light brightness information; fluorescence wavelength information; temperature information; humidity information; pH value information; polarized light angle information; DIC rotation angle information; altitude information; and the like.
The microscopic sub-video request includes the corresponding microscopic parameter information, and the cloud returns the corresponding microscopic sub-video information to the terminal based on that parameter information. For example, if a user wants to observe microscopic data of both red blood cells and white blood cells in a blood sample, where, owing to the differences between the two cell types, the red blood cells are observed with a ten-fold objective lens and the white blood cells with a hundred-fold objective lens, the terminal sends the identification information of the blood sample to the cloud, with the microscopic parameter information corresponding to the red-blood-cell region specifying the ten-fold objective magnification and that corresponding to the white-blood-cell region specifying the hundred-fold objective magnification; the cloud then returns, according to the microscopic sub-video request, the microscopic sub-video information under the ten-fold objective for the sub-region containing the red blood cells and the microscopic sub-video information under the hundred-fold objective for the sub-region containing the white blood cells.
In some embodiments, the microscopic sub-video request includes identification information of at least one sub-region of a plurality of sub-regions of the target object, and the two-in-two module 202 is configured to determine the microscopic sub-video information of the at least one sub-region according to its identification information, wherein each sub-region corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes a plurality of pieces of microscopic image information based on a time sequence. For example, when studying the microscopic data of a target object, a user may be interested in only one or more of the sub-regions. Based on the user's operation, the terminal sends the identification information of the at least one sub-region to the cloud; the cloud, based on that identification information, either determines the corresponding at least one piece of microscopic sub-video information directly, or determines the microscopic image information of the corresponding at least one sub-region and from it determines the corresponding at least one piece of microscopic sub-video information; the cloud then returns the at least one piece of microscopic sub-video information to the terminal, which receives and presents it.
In some embodiments, the identification information of at least one of the plurality of sub-regions of the target object includes, but is not limited to: image information of at least one sub-region of the plurality of sub-regions of the target object; microscopic record information of at least one sub-region; a plurality of pieces of microscopic image information of at least one sub-region; image mark information on at least one of the plurality of sub-regions in the image information of the target object; and the like. For example, the identification information includes an identifier for determining the corresponding at least one piece of microscopic sub-video information, including but not limited to a key field of at least one sub-region of the target object, such as the name of the sub-region or a keyword extracted from that name for searching; the identification information may include microscopic record information of the at least one sub-region, such as a history of the microscopic image information or microscopic sub-video information about that sub-region uploaded or searched by the user in an application; it may include unique identification code information of the at least one sub-region, such as a unique identification code set for the sub-region in an application; and it may include microscopic image information of the at least one sub-region, in which case, for example, the user directly sends the microscopic image information of the at least one sub-region of the target object to the cloud, and the cloud generates the corresponding at least one piece of microscopic sub-video information and returns it to the terminal.
In addition to the methods and apparatus described in the embodiments above, the present application also provides a computer readable storage medium storing computer code that, when executed, performs the method as described in any of the preceding claims.
The present application also provides a computer program product, which when executed by a computer device, performs the method of any of the preceding claims.
The present application further provides a computer device, comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any preceding embodiment.
FIG. 7 illustrates an exemplary system that can be used to implement the various embodiments described herein. In some embodiments, as shown in FIG. 7, the system 300 can be implemented as any of the above-described devices in the various embodiments. In some embodiments, system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage 320) having instructions and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules to perform the actions described herein.
For one embodiment, system control module 310 may include any suitable interface controllers to provide any suitable interface to at least one of processor(s) 305 and/or any suitable device or component in communication with system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
System memory 315 may be used, for example, to load and store data and/or instructions for system 300. For one embodiment, system memory 315 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, the system memory 315 may include a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or may be accessed by the device and not necessarily part of the device. For example, NVM/storage 320 may be accessible over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. System 300 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) (e.g., memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310 to form a system on a chip (SoC).
In various embodiments, system 300 may be, but is not limited to being: a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, portions of the present application may be implemented as a computer program product, such as computer program instructions which, when executed by a computer, may invoke or provide methods and/or technical solutions in accordance with the present application through the operation of the computer. Those skilled in the art will appreciate that computer program instructions may reside on a computer-readable medium in forms including, but not limited to, source files, executable files, and installation package files, and that the manner in which a computer executes such instructions includes, but is not limited to: directly executing the instructions; compiling the instructions and then executing the resulting compiled program; reading and executing the instructions; or reading and installing the instructions and then executing the resulting installed program. The computer-readable medium herein can be any available computer-readable storage medium or communication medium that can be accessed by a computer.
Communication media includes media by which communication signals, including, for example, computer readable instructions, data structures, program modules, or other data, are transmitted from one system to another. Communication media may include conductive transmission media such as cables and wires (e.g., fiber optics, coaxial, etc.) and wireless (non-conductive transmission) media capable of propagating energy waves such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules, or other data may be embodied in a modulated data signal, for example, in a wireless medium such as a carrier wave or similar mechanism such as is embodied as part of spread spectrum techniques. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); and non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); and magnetic and optical storage devices (hard disk, tape, CD, DVD); or other now known media or later developed that can store computer-readable information/data for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
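The core presentation step of the embodiments above is that several microscopic sub-videos are played so that, at any given display moment, every sub-region shows the frame for the same time node of the shared time sequence. A minimal lock-step sketch of this idea follows; the frame-list data structure and the generator loop are illustrative assumptions, not the application's implementation:

```python
# Minimal sketch of lock-step presentation of microscopic sub-videos:
# at each display moment, every sub-region's sub-video is shown at the
# SAME time node of the shared time sequence. Illustrative only.

def synchronized_frames(sub_videos: dict):
    """Yield, per time node, the frame of every sub-region at that node.

    sub_videos maps a sub-region id to its time-ordered frame list;
    all lists follow the same time sequence (assumed equal length).
    """
    lengths = {len(frames) for frames in sub_videos.values()}
    assert len(lengths) == 1, "sub-videos must share one time sequence"
    n = lengths.pop()
    for t in range(n):  # t is the shared time node
        yield t, {region: frames[t] for region, frames in sub_videos.items()}

# Example: two sub-regions, three time nodes each.
subs = {"A": ["a0", "a1", "a2"], "B": ["b0", "b1", "b2"]}
timeline = list(synchronized_frames(subs))
```

Here `timeline[t]` holds the frames all sub-regions display at time node `t`, which captures the "same time node at the same moment" property claimed below without prescribing any particular rendering pipeline.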

Claims (24)

1. A method of presenting microscopic sub-video information of a target object, wherein the method comprises:
acquiring a plurality of pieces of microscopic sub-video information about a target object, wherein the target object comprises a plurality of sub-areas, each sub-area corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information comprises a plurality of pieces of microscopic image information based on a time sequence;
and displaying the microscopic sub-video information corresponding to the plurality of sub-areas through a display device, wherein the microscopic sub-video information corresponding to each sub-area corresponds to the same time node in the time sequence at the same moment in the displaying process.
2. The method of claim 1, wherein the microscopic sub-video information is generated from a plurality of microscopic image information of the corresponding sub-region based on the time series.
3. The method of claim 2, wherein the method further comprises:
and arranging the plurality of pieces of microscopic image information according to the time sequence of the plurality of pieces of microscopic image information and a preset time sequence to generate microscopic sub-video information corresponding to the sub-areas.
4. The method of any of claims 1-3, wherein the microscopic sub-video information comprises at least any one of:
two-dimensional microscopic sub-video information;
three-dimensional microscopic sub-video information.
5. The method of any of claims 1-4, wherein the method further comprises:
generating a corresponding control instruction based on a control operation performed by the user on the plurality of pieces of microscopic sub-video information;
wherein the presenting, through a display device, of the microscopic sub-video information corresponding to the plurality of sub-areas, where the microscopic sub-video information corresponding to each sub-area corresponds to the same time node in the time sequence at the same moment in the presentation process, comprises:
presenting the plurality of pieces of microscopic sub-video information through a display device according to the control instruction, wherein the microscopic sub-video information corresponding to each sub-area corresponds to the same time node in the time sequence at the same moment in the presentation process.
6. The method of claim 5, wherein the control instruction comprises at least any one of:
presenting at least one piece of the plurality of pieces of microscopic sub-video information;
zooming in or zooming out at least one piece of the plurality of pieces of microscopic sub-video information;
pausing at least one piece of the plurality of pieces of microscopic sub-video information;
selecting, for presentation, a first time interval in the time sequence corresponding to the plurality of pieces of microscopic sub-video information;
and adjusting the presentation position of at least one piece of the plurality of pieces of microscopic sub-video information.
7. The method of claim 5 or 6, wherein the microscopic sub-video information comprises three-dimensional microscopic sub-video information; wherein the control instruction comprises at least any one of:
adjusting a presentation viewing angle of at least one piece of the plurality of pieces of microscopic sub-video information;
and scrolling the presentation of at least one piece of the plurality of pieces of microscopic sub-video information.
8. The method of any of claims 5 to 7, wherein each of the plurality of sub-regions corresponds to a plurality of microscopic sub-video information corresponding to at least one different microscopic parameter information; wherein the control instruction comprises switching microscopic sub-video information corresponding to at least one sub-region of the plurality of sub-regions based on corresponding microscopic parameter information.
9. The method of claim 8, wherein the microscopic parameter information comprises at least any one of:
focal plane height information;
objective lens multiple information;
lighting light color information;
lighting lamp brightness information;
fluorescence wavelength information;
temperature information;
humidity information;
pH value information;
polarized light angle information;
DIC rotation angle information;
and altitude information.
10. The method according to any one of claims 1 to 9, wherein the acquiring a plurality of microscopic sub video information about a target object, wherein the target object comprises a plurality of sub areas, each sub area corresponding to at least one microscopic sub video information, the microscopic sub video information comprising a plurality of microscopic image information based on a time sequence comprises:
sending a microscopic sub-video request about a target object to a cloud, wherein the microscopic sub-video request comprises identification information of the target object;
receiving a plurality of pieces of microscopic sub-video information which is returned by the cloud and relates to the target object, wherein the target object comprises a plurality of sub-areas, each sub-area corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information comprises a plurality of pieces of microscopic image information based on a time sequence.
11. The method of claim 10, wherein the microscopic sub-video request further comprises microscopic parameter information of the plurality of pieces of microscopic sub-video information;
the receiving of the multiple pieces of microscopic sub-video information, which is returned by the cloud and relates to the target object, wherein the target object includes multiple sub-areas, each sub-area corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes multiple pieces of microscopic image information based on a time sequence, including:
receiving a plurality of pieces of microscopic sub-video information which is returned by the cloud and accords with the microscopic parameter information and is related to the target object, wherein the target object comprises a plurality of sub-areas, each sub-area corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information comprises a plurality of pieces of microscopic image information based on a time sequence.
12. The method of claim 10, wherein the receiving of the cloud-returned plurality of microscopic sub-video information about the target object, wherein the target object comprises a plurality of sub-regions, each sub-region corresponding to at least one microscopic sub-video information, the microscopic sub-video information comprising a plurality of microscopic image information based on a time sequence comprises:
receiving access link information which is returned by the cloud and relates to a plurality of pieces of microscopic sub-video information of the target object, wherein the target object comprises a plurality of sub-areas, each sub-area corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information comprises a plurality of pieces of microscopic image information based on a time sequence;
wherein, the presenting, by the display device, the microscopic sub-video information corresponding to the plurality of sub-areas, where the microscopic sub-video information corresponding to each sub-area corresponds to the same time node in the time sequence at the same time in the presentation process, includes:
and accessing the corresponding webpage according to the access link information, and presenting the microscopic sub-video information corresponding to the plurality of sub-areas through a display device, wherein the microscopic sub-video information corresponding to each sub-area corresponds to the same time node in the time sequence at the same time in the presentation process.
13. The method of any of claims 10 to 12, wherein the identification information of the target object comprises at least any one of:
a key field of the target object;
image information of the target object;
microscopic recording information of the target object;
unique identification code information of the target object;
and a plurality of microscopic image information corresponding to the plurality of sub-regions of the target object.
14. The method of claim 10, wherein the microscopic sub-video request includes identification information of at least one of a plurality of sub-regions of the target object;
the receiving of the multiple pieces of microscopic sub-video information, which is returned by the cloud and relates to the target object, wherein the target object includes multiple sub-areas, each sub-area corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes multiple pieces of microscopic image information based on a time sequence, including:
receiving microscopic sub-video information returned by the cloud end and corresponding to at least one sub-region in a plurality of sub-regions of the target object, wherein each sub-region corresponds to at least one microscopic sub-video information, and the microscopic sub-video information comprises a plurality of microscopic image information based on a time sequence;
wherein, the presenting, by the display device, the microscopic sub-video information corresponding to the plurality of sub-areas, where the microscopic sub-video information corresponding to each sub-area corresponds to the same time node in the time sequence at the same time in the presentation process, includes:
and presenting the microscopic sub-video information corresponding to at least one sub-area in the plurality of sub-areas through a display device, wherein the microscopic sub-video information corresponding to each sub-area in the at least one sub-area corresponds to the same time node in the time sequence at the same moment in the presentation process.
15. The method of claim 14, wherein the identification information of at least one of the plurality of sub-regions of the target object comprises at least any one of:
image information of at least one subregion of a plurality of subregions of the target object;
microscopic recording information of at least one sub-region of a plurality of sub-regions of the target object;
a plurality of microscopic image information of at least one sub-region of a plurality of sub-regions of the target object;
image marking information on at least one of the plurality of sub-regions in the image information of the target object.
16. A method of presenting microscopic sub-video information of a target object, applied to a cloud, wherein the method comprises:
receiving a microscopic sub-video request about a target object sent by a terminal, wherein the microscopic sub-video request comprises identification information of the target object;
determining a plurality of pieces of microscopic sub-video information of the target object according to the identification information of the target object, wherein the target object comprises a plurality of sub-areas, each sub-area corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information comprises a plurality of pieces of microscopic image information based on a time sequence;
and returning the plurality of pieces of microscopic sub-video information of the target object to the terminal.
17. The method according to claim 16, wherein the determining a plurality of pieces of microscopic sub-video information of the target object according to the identification information of the target object, wherein the target object includes a plurality of sub-regions, each sub-region corresponding to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes a plurality of pieces of microscopic image information based on a time sequence, includes:
querying, according to the identification information of the target object, whether a plurality of pieces of microscopic sub-video information of the target object exist in a microscopic video database;
and if not, acquiring, based on the time sequence, a plurality of pieces of microscopic image information of each of the plurality of sub-regions of the target object, and generating the microscopic sub-video information corresponding to each sub-region based on that sub-region's time-sequenced plurality of pieces of microscopic image information.
18. The method of claim 16, wherein the microscopic sub-video request further comprises microscopic parameter information of the plurality of pieces of microscopic sub-video information;
the determining, according to the identification information of the target object, a plurality of pieces of microscopic sub-video information of the target object, where the target object includes a plurality of sub-regions, each sub-region corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes a plurality of pieces of microscopic image information based on a time sequence, includes:
determining a plurality of pieces of microscopic sub-video information of the target object according to the identification information of the target object, wherein the plurality of microscopic sub-video information of the target object conforms to the microscopic parameter information, the target object comprises a plurality of sub-areas, each sub-area corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information comprises a plurality of pieces of microscopic image information based on a time sequence.
19. The method of claim 16, wherein the microscopic sub-video request includes identification information of at least one of a plurality of sub-regions of the target object;
the determining, according to the identification information of the target object, a plurality of pieces of microscopic sub-video information of the target object, where the target object includes a plurality of sub-regions, each sub-region corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information includes a plurality of pieces of microscopic image information based on a time sequence, includes:
determining microscopic sub-video information of at least one sub-region in the plurality of sub-regions of the target object according to the identification information of the at least one sub-region, wherein each sub-region corresponds to at least one microscopic sub-video information, and the microscopic sub-video information comprises a plurality of microscopic image information based on a time sequence.
20. A method of presenting microscopic sub-video information of a target object, wherein the method comprises:
a terminal sends a microscopic sub-video request about a target object to a cloud, wherein the microscopic sub-video request comprises identification information of the target object;
the cloud receives the microscopic sub-video request and determines, according to the identification information of the target object, a plurality of pieces of microscopic sub-video information of the target object, wherein the target object comprises a plurality of sub-areas, each sub-area corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information comprises a plurality of pieces of microscopic image information based on a time sequence;
the cloud returns the multiple pieces of microscopic sub-video information of the target object to the terminal;
and the terminal receives and displays the microscopic sub-video information corresponding to the plurality of sub-areas through a display device, wherein the microscopic sub-video information corresponding to each sub-area corresponds to the same time node in the time sequence at the same moment in the display process.
21. An apparatus for presenting microscopic sub-video information of a target object, wherein the apparatus comprises:
a first module, configured to acquire a plurality of pieces of microscopic sub-video information about a target object, wherein the target object comprises a plurality of sub-regions, each sub-region corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information comprises a plurality of pieces of microscopic image information based on a time sequence;
and a second module, configured to present, through a display device, the microscopic sub-video information corresponding to the plurality of sub-regions, wherein the microscopic sub-video information corresponding to each sub-region corresponds to the same time node in the time sequence at the same moment in the presentation process.
22. An apparatus for presenting microscopic sub-video information of a target object, wherein the apparatus comprises:
a first module, configured to receive a microscopic sub-video request about a target object sent by a terminal, wherein the microscopic sub-video request comprises identification information of the target object;
a second module, configured to determine, according to the identification information of the target object, a plurality of pieces of microscopic sub-video information of the target object, wherein the target object comprises a plurality of sub-regions, each sub-region corresponds to at least one piece of microscopic sub-video information, and the microscopic sub-video information comprises a plurality of pieces of microscopic image information based on a time sequence;
and a third module, configured to return the plurality of pieces of microscopic sub-video information of the target object to the terminal.
23. An apparatus for presenting microscopic sub-video information of a target object, wherein the apparatus comprises:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the operations of the method of any of claims 1 to 19.
24. A computer-readable medium storing instructions that, when executed, cause a system to perform the operations of any of the methods of claims 1-19.
CN202010172172.2A 2020-03-12 2020-03-12 Method and equipment for presenting microscopic sub-video information of target object Pending CN113395484A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010172172.2A CN113395484A (en) 2020-03-12 2020-03-12 Method and equipment for presenting microscopic sub-video information of target object

Publications (1)

Publication Number Publication Date
CN113395484A true CN113395484A (en) 2021-09-14

Family

ID=77616626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010172172.2A Pending CN113395484A (en) 2020-03-12 2020-03-12 Method and equipment for presenting microscopic sub-video information of target object

Country Status (1)

Country Link
CN (1) CN113395484A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1918751A1 (en) * 2006-10-31 2008-05-07 Olympus Corporation Microscope system, observation method and observation program
US20100103253A1 (en) * 2006-09-06 2010-04-29 Leica Microsystems Cms Gmbh Method and microscopic system for scanning a sample
US20110316999A1 (en) * 2010-06-21 2011-12-29 Olympus Corporation Microscope apparatus and image acquisition method
JP2012190033A (en) * 2012-05-07 2012-10-04 Olympus Corp Microscopic photographing device and microscopic photographing device control method
JP2012190028A (en) * 2012-04-27 2012-10-04 Olympus Corp Microscope system
US20140043462A1 (en) * 2012-02-10 2014-02-13 Inscopix, Inc. Systems and methods for distributed video microscopy
US20170212342A1 (en) * 2013-11-28 2017-07-27 Femtonics Kft. Optical microscope system for simultaneous observation of spatially distinct regions of interest


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210914