CN113395483B - Method and device for presenting multiple microscopic sub-video information - Google Patents

Method and device for presenting multiple microscopic sub-video information

Info

Publication number
CN113395483B
Authority
CN
China
Prior art keywords
microscopic
sub
information
video information
video
Prior art date
Legal status
Active
Application number
CN202010171426.9A
Other languages
Chinese (zh)
Other versions
CN113395483A (en)
Inventor
张大庆
Current Assignee
Pinghu Laidun Optical Instrument Manufacturing Co ltd
Original Assignee
Pinghu Laidun Optical Instrument Manufacturing Co ltd
Priority date
Filing date
Publication date
Application filed by Pinghu Laidun Optical Instrument Manufacturing Co ltd
Priority to CN202010171426.9A
Publication of CN113395483A
Application granted
Publication of CN113395483B
Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00 Microscopes
    • G02B21/36 Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/361 Optical details, e.g. image relay to the camera or image sensor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Microscopes, Condenser (AREA)

Abstract

The object of the present application is to provide a method for presenting a plurality of microscopic sub-video information, the method comprising: acquiring a plurality of first microscopic sub-video information, wherein each first microscopic sub-video information comprises a plurality of microscopic image information of a corresponding sub-region with respect to a first microscopic parameter sequence, and the first microscopic parameter sequence comprises a plurality of assignments of the first microscopic parameter; and synchronously presenting the plurality of first microscopic sub-video information according to the first microscopic parameter sequence. By acquiring the microscopic sub-video information of the sub-regions, presenting the plurality of microscopic sub-video information on the basis of the first microscopic parameter sequence, and comparing them through synchronous presentation, the present application makes microscopic data research more comprehensive and detailed, achieves a novel microscopic observation effect, and greatly improves the user experience.

Description

Method and device for presenting multiple microscopic sub-video information
Technical Field
The present application relates to the field of communications, and more particularly to a technique for presenting a plurality of microscopic sub-video information.
Background
Microscopic optical imaging, commonly referred to as optical microscopy or light microscopy (Optical Microscopy, or Light Microscopy), is a technique in which visible light that has passed through or been reflected by a tiny sample is magnified by one or more lenses to obtain an enlarged image of the sample. The resulting image can be observed directly by eye through an eyepiece, recorded by a photosensitive plate or by a digital image sensor such as a CCD (charge-coupled device) or CMOS (complementary metal oxide semiconductor) detector, and displayed and analyzed on a computer. By further combining the microscope with an image pickup device, a video of the sample within the field of view can also be recorded. However, the field of view that a microscope can observe is limited: when the size of the observed sample exceeds the current field of view, only the part of the sample within the current field of view can be observed at any one time.
Disclosure of Invention
It is an object of the present application to provide a method and apparatus for presenting a plurality of microscopic sub-video information.
According to one aspect of the present application, there is provided a method for presenting a plurality of microscopic sub-video information, the method comprising:
acquiring a plurality of first microscopic sub-video information, wherein each first microscopic sub-video information comprises a plurality of microscopic image information of a corresponding sub-region about a first microscopic parameter sequence, and the first microscopic parameter sequence comprises a plurality of assignments corresponding to the first microscopic parameters;
and synchronously presenting the plurality of first microscopic sub-video information according to the first microscopic parameter sequence.
According to another aspect of the present application, there is provided a method for presenting a plurality of microscopic sub-video information, applied to a network device side, the method comprising:
receiving a microscopic sub-video request sent by user equipment, wherein the microscopic sub-video request comprises identification information of sub-regions and a first microscopic parameter;
determining a plurality of corresponding first microscopic sub-video information according to the identification information of the subarea and the first microscopic parameters, wherein each first microscopic sub-video information comprises a plurality of microscopic image information of the corresponding subarea about a first microscopic parameter sequence, and the first microscopic parameter sequence comprises a plurality of assignments corresponding to the first microscopic parameters;
and returning the plurality of first microscopic sub-video information to the user equipment.
According to one aspect of the present application, there is provided a method for presenting a plurality of microscopic sub-video information, wherein the method comprises:
the user equipment sends a microscopic sub-video request about a plurality of subareas to the network equipment, wherein the microscopic sub-video request comprises identification information of the subareas and first microscopic parameter information;
the network equipment receives the microscopic sub-video request, and determines a plurality of corresponding first microscopic sub-video information according to the identification information of the subarea and the first microscopic parameter information, wherein each first microscopic sub-video information comprises a plurality of microscopic image information of the corresponding subarea about a first microscopic parameter sequence, and the first microscopic parameter sequence comprises a plurality of assignments of the first microscopic parameters;
the network equipment returns the first microscopic sub-video information to the user equipment;
and the user equipment receives the plurality of first microscopic sub-video information returned by the network equipment and synchronously presents the plurality of first microscopic sub-video information according to the first microscopic parameter sequence.
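For illustration only, the end-to-end flow above can be sketched as follows. The application does not prescribe any data structures or function names, so MicroSubVideoRequest, MicroSubVideo, handle_request and present_synchronously are purely hypothetical; the sketch only assumes that the network device stores sub-videos keyed by sub-region identifier and first microscopic parameter.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class MicroSubVideoRequest:          # hypothetical request structure
    sub_region_ids: List[str]        # identification information of the sub-regions
    first_parameter: str             # e.g. "shooting_time" or "focal_plane_height"

@dataclass
class MicroSubVideo:                 # one first microscopic sub-video information
    sub_region_id: str
    parameter_sequence: List[float]  # assignments of the first microscopic parameter
    frames: List[bytes]              # microscopic image information, one frame per assignment

# network-device side: look up the sub-videos for the requested sub-regions
def handle_request(store: Dict[Tuple[str, str], MicroSubVideo],
                   req: MicroSubVideoRequest) -> List[MicroSubVideo]:
    return [store[(rid, req.first_parameter)] for rid in req.sub_region_ids
            if (rid, req.first_parameter) in store]

# user-equipment side: present the returned sub-videos synchronously
def present_synchronously(videos: List[MicroSubVideo]) -> None:
    if not videos:
        return
    # frame i of every sub-video corresponds to the same assignment of the first parameter
    for i, value in enumerate(videos[0].parameter_sequence):
        frames = [v.frames[i] for v in videos]
        print(f"assignment {value}: presenting {len(frames)} sub-videos side by side")
```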
According to one aspect of the present application, there is provided an apparatus for presenting a plurality of microscopic sub-video information, wherein the apparatus comprises:
the system comprises a one-to-one module, a first microscopic sub-video module and a second microscopic sub-video module, wherein the one-to-one module is used for acquiring a plurality of first microscopic sub-video information, each first microscopic sub-video information comprises a plurality of microscopic image information of a corresponding sub-region relative to a first microscopic parameter sequence, and the first microscopic parameter sequence comprises a plurality of assignments corresponding to the first microscopic parameters;
and the second module is used for synchronously presenting the plurality of first microscopic sub-video information according to the first microscopic parameter sequence.
According to another aspect of the present application, there is provided a network device for presenting a plurality of microscopic sub-video information, wherein the device comprises:
a module 2-1, used for receiving a microscopic sub-video request sent by user equipment, wherein the microscopic sub-video request comprises identification information of sub-regions and a first microscopic parameter;
a module 2-2, used for determining a plurality of corresponding first microscopic sub-video information according to the identification information of the sub-regions and the first microscopic parameter, wherein each first microscopic sub-video information comprises a plurality of microscopic image information of the corresponding sub-region with respect to a first microscopic parameter sequence, and the first microscopic parameter sequence comprises a plurality of assignments of the first microscopic parameter;
and a module 2-3, used for returning the plurality of first microscopic sub-video information to the user equipment.
According to one aspect of the present application, there is provided an apparatus for presenting a plurality of microscopic sub-video information, wherein the apparatus comprises:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the operations of any of the methods described above.
According to one aspect of the present application, there is provided a computer readable medium storing instructions that, when executed, cause a system to perform the operations of any of the methods described above.
Compared with the prior art, the present application acquires a plurality of first microscopic sub-video information, wherein each first microscopic sub-video information comprises a plurality of microscopic image information of a corresponding sub-region with respect to a first microscopic parameter sequence, and the first microscopic parameter sequence comprises a plurality of assignments of the first microscopic parameter, and synchronously presents the plurality of first microscopic sub-video information according to the first microscopic parameter sequence. By acquiring the microscopic sub-video information of the sub-regions, presenting the plurality of microscopic sub-video information on the basis of the first microscopic parameter sequence, and comparing them through synchronous presentation, the present application makes microscopic data research more comprehensive and detailed, achieves a novel microscopic observation effect, and greatly improves the user experience.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings, in which:
FIG. 1 illustrates a flow chart of a method for presenting a plurality of microscopic sub-video information according to one embodiment of the present application;
FIG. 2 shows substep S1011 and substep S1012 of step S101 in FIG. 1;
FIG. 3 illustrates a system method diagram for presenting a plurality of microscopic sub-video information according to one embodiment of the present application;
FIG. 4 shows a flowchart of a method for presenting a plurality of microscopic sub-video information applied to the network device of FIG. 3;
FIG. 5 illustrates functional blocks of an apparatus for presenting a plurality of microscopic sub-video information according to one embodiment of the present application;
FIG. 6 illustrates functional blocks of a network device for presenting a plurality of microscopic sub-video information according to another embodiment of the present application;
FIG. 7 illustrates an exemplary system that can be used to implement various embodiments described herein.
The same or similar reference numbers in the drawings refer to the same or similar parts.
Detailed Description
The present application is described in further detail below with reference to the accompanying drawings.
In one typical configuration of the present application, the terminal, the devices of the service network, and the trusted party each include one or more processors (e.g., central processing units (Central Processing Unit, CPU)), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent memory, random access memory (Random Access Memory, RAM) and/or non-volatile memory in a computer readable medium, such as read-only memory (ROM) or flash memory (Flash Memory). Memory is an example of a computer-readable medium.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (Programmable Random Access Memory, PRAM), static random access memory (SRAM), dynamic random access memory (Dynamic Random Access Memory, DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), digital versatile discs (Digital Versatile Disc, DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The device referred to in the present application includes, but is not limited to, a user device, a network device, or a device formed by integrating a user device and a network device through a network. The user device includes, but is not limited to, any mobile electronic product capable of man-machine interaction with a user (for example, man-machine interaction through a touch pad), such as a smart phone or a tablet computer, and the mobile electronic product may adopt any operating system, such as the Android operating system or the iOS operating system. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic device (Programmable Logic Device, PLD), a field programmable gate array (Field Programmable Gate Array, FPGA), a digital signal processor (Digital Signal Processor, DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing (Cloud Computing), where cloud computing is a kind of distributed computing, namely a virtual supercomputer composed of a group of loosely coupled computers. The network includes, but is not limited to, the Internet, wide area networks, metropolitan area networks, local area networks, VPN networks, wireless ad hoc networks (Ad Hoc networks), and the like. Preferably, the device may also be a program running on the user device, the network device, or a device formed by integrating the user device with the network device, the touch terminal, or the network device with the touch terminal through a network.
Of course, those skilled in the art will appreciate that the above-described devices are merely examples, and that other existing devices or devices that may appear in the future, where applicable to the present application, are also intended to fall within the scope of protection of the present application and are incorporated herein by reference.
In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
According to one aspect of the present application, Fig. 1 shows a method for presenting a plurality of microscopic sub-video information, the method being generally applicable to a computing device and comprising steps S101 and S102. In step S101, the apparatus acquires a plurality of first microscopic sub-video information, where each first microscopic sub-video information includes a plurality of microscopic image information of a corresponding sub-region with respect to a first microscopic parameter sequence, and the first microscopic parameter sequence includes a plurality of assignments of the first microscopic parameter; in step S102, the apparatus synchronously presents the plurality of first microscopic sub-video information according to the first microscopic parameter sequence. The method is applicable to a computing device, wherein the computing device includes, but is not limited to, a user device, a network device, or a device formed by integrating a user device and a network device through a network; the user device includes, but is not limited to, any terminal capable of man-machine interaction with a user (such as man-machine interaction through a touch pad), and the network device includes, but is not limited to, a computer, a network host, a single network server, a cloud formed by multiple network servers, or multiple servers. The present application can realize same-screen presentation of a plurality of microscopic sub-video information of different target objects, construct a same-screen comparison environment for the sub-regions of the different target objects, and regulate and control the sub-regions of the different target objects based on the same-screen presentation, so that the comparison is more autonomous and a good microscopic data research environment is created for the user.
Specifically, in step S101, the apparatus acquires a plurality of first microscopic sub-video information, wherein each first microscopic sub-video information includes a plurality of microscopic image information of a corresponding sub-region with respect to a first microscopic parameter sequence, and the first microscopic parameter sequence includes a plurality of assignments of the first microscopic parameter. For example, the first microscopic sub-video information includes microscopic video information generated from a plurality of microscopic image information of the sub-region according to the first microscopic parameter sequence and specific video parameters; here, the word "first" in "first microscopic sub-video information" does not denote any particular order. The microscopic sub-video information may also differ according to the dimensional information of the microscopic image information from which it is generated; for example, the microscopic image information may include two-dimensional microscopic image information and/or three-dimensional microscopic image information, and the corresponding first microscopic sub-video information may accordingly include two-dimensional microscopic sub-video information and/or three-dimensional microscopic sub-video information. In some embodiments, the first microscopic sub-video information may include, but is not limited to: two-dimensional microscopic sub-video information; three-dimensional microscopic sub-video information. The two-dimensional microscopic image information may be obtained by combining a high-definition image pickup device (such as a CCD camera) with the optical lens of a microscope and collecting photographed images of a target object or of a sub-region of the target object, or may be two-dimensional microscopic image information in which the whole sub-region is clearer, obtained by combining the clearer (in-focus) parts of the depth of field of each of a plurality of photographed images of the sub-region; the two-dimensional microscopic sub-video information is generated from the two-dimensional image information of the sub-region. The three-dimensional microscopic image information is image information in which the whole sub-region is clearer, generated from the clearer parts of the depth of field of each of the plurality of photographed images of the sub-region together with the height information of each pixel point; the coordinate system of the three-dimensional microscopic image information may be a world coordinate system established with the center of the sub-region as the origin, and the focal plane height information of each photographed image includes the coordinate information of that photographed image in the Z-axis direction (the moving direction of the objective lens).
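The composition of an all-in-focus two-dimensional image, and of a per-pixel height map for the three-dimensional case, from a Z-stack of photographed images can be illustrated with a minimal focus-stacking sketch. It assumes OpenCV/NumPy, aligned grayscale frames and known focal-plane heights; the function and variable names are illustrative and not part of the application.

```python
import cv2
import numpy as np

def stack_focus(frames, heights):
    """frames: list of aligned grayscale images (H x W, uint8);
    heights: focal-plane Z coordinate of each frame."""
    # per-pixel sharpness: absolute Laplacian response, lightly smoothed
    sharpness = np.stack([
        cv2.GaussianBlur(np.abs(cv2.Laplacian(f, cv2.CV_64F)), (5, 5), 0)
        for f in frames
    ])
    best = np.argmax(sharpness, axis=0)          # index of the sharpest frame per pixel
    stack = np.stack(frames)
    rows, cols = np.indices(best.shape)
    fused_2d = stack[best, rows, cols]           # all-in-focus 2-D microscopic image
    height_map = np.asarray(heights)[best]       # per-pixel Z height for the 3-D case
    return fused_2d, height_map
```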
The microscopic image information corresponding to the first microscopic sub-video information is captured according to a first microscopic parameter, and the first microscopic parameter sequence includes a plurality of assignments of the first microscopic parameter. Each assignment is determined from the value that the parameter takes for the sub-region at the moment the image pickup device acquires a photographed image: during shooting, the first microscopic parameter usually varies within a certain range, the image pickup device captures photographed images of the sub-region during this variation, and the value of the first microscopic parameter at the time node of each capture is recorded and used as an assignment of the first microscopic parameter. In some embodiments, the first microscopic parameter includes, but is not limited to: shooting time information; focal plane height information; rotation angle information; pitch angle information; yaw angle information; illumination light brightness information; illumination light color information; temperature information; humidity information; pH value information; fluorescence band information; polarized light angle information; DIC rotation angle information. For example, the first microscopic parameter information includes an independent variable that can be varied continuously and gradually in the microscopic system where the target object is located, and the value of the parameter may be a specific value or an interval, for example an interval [T-t0, T+t0], and so on.
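As a purely illustrative data model (not part of the application), the relation between a first microscopic parameter, its assignments and the resulting first microscopic parameter sequence might be expressed as follows; the enumeration simply mirrors the parameters listed above.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List

class FirstMicroParameter(Enum):      # the first microscopic parameters listed above
    SHOOTING_TIME = auto()
    FOCAL_PLANE_HEIGHT = auto()
    ROTATION_ANGLE = auto()
    PITCH_ANGLE = auto()
    YAW_ANGLE = auto()
    ILLUMINATION_BRIGHTNESS = auto()
    ILLUMINATION_COLOR = auto()
    TEMPERATURE = auto()
    HUMIDITY = auto()
    PH_VALUE = auto()
    FLUORESCENCE_BAND = auto()
    POLARIZED_LIGHT_ANGLE = auto()
    DIC_ROTATION_ANGLE = auto()

@dataclass
class ParameterSequence:
    parameter: FirstMicroParameter
    assignments: List[float]          # value of the parameter when each image was captured

# example: a focal-plane-height sequence recorded while the objective moved upward
seq = ParameterSequence(FirstMicroParameter.FOCAL_PLANE_HEIGHT,
                        assignments=[0.0, 2.5, 5.0, 7.5, 10.0])
```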
For example, the shooting time information includes the time at which the image pickup device captures a corresponding photographed image. For instance, when a user records a cell culture process through the image pickup device and acquires image information about a sub-region of the cell culture every ten minutes, the corresponding shooting time is the recorded time of each capture, and the corresponding first microscopic parameter sequence includes a time sequence formed by arranging all the acquisition times in a certain order (such as chronological order).
For example, the focal plane height information includes Z-axis height information (such as the Z-axis coordinate) of the focal plane of the objective lens as it moves along the Z axis. For instance, a user collects information about a sample in a sample solution through the image pickup device while the focal plane moves from the bottom of the sample solution to its top, and microscopic image information of a certain sub-region of the sample solution is collected during this continuous change; here, the focal plane height information is the height information of the focal plane of the objective lens, and the first microscopic parameter sequence includes a height sequence formed by arranging the collected microscopic image information in a certain order (such as from low to high or from high to low).
For example, the rotation angle information, the pitch angle information and the yaw angle information include attitude information of the target object in an inertial coordinate system. For instance, when a user acquires microscopic data of a sample from different viewing angles through the image pickup device of the microscopic system, the sample gradually moves from a specific Euler angle to other Euler angles, and microscopic image information about the sample is acquired during this continuous motion; the corresponding Euler angles include rotation angle information, pitch angle information and yaw angle information, and the first microscopic parameter sequence includes an angle sequence formed by arranging the Euler angles of the acquired microscopic image information in a certain order (such as the direction of rotation), and so on.
For example, the first microscopic parameter information further includes illumination light brightness information, illumination light color information, temperature information, humidity information, pH value information, fluorescence band information, polarized light angle information and DIC rotation angle information; the user acquires photographed images of the target object through the image pickup device while the first microscopic parameter gradually changes from one assignment to another, determines a plurality of corresponding assignments from the first microscopic parameter corresponding to each photographed image, and then determines the corresponding first microscopic parameter sequence according to a specific order of these assignments.
In some embodiments, in step S101, the computing device acquires, for each sub-region, a plurality of microscopic image information with respect to the first microscopic parameter sequence, and determines the first microscopic sub-video information of each sub-region according to the first microscopic parameter sequence and the plurality of microscopic image information of that sub-region. For example, the computing device may capture the microscopic image information via a camera of the microscopic system, or receive microscopic image information about a plurality of sub-regions transmitted by other devices. The computing device may then determine the corresponding microscopic sub-video information according to the order of the multiple assignments in the first microscopic parameter sequence and the assignment of the first microscopic parameter corresponding to each microscopic image information. Taking shooting time information as an example of the first microscopic parameter, the computing device may sort the microscopic image information of the sub-region according to the time node corresponding to each microscopic image information and a preset time sequence, set certain video parameters (such as playing 30 frames per second), and generate the corresponding first microscopic sub-video information. The preset time sequence may be set by the user or selected from time sequences provided by the system, and includes, but is not limited to, sorting in chronological order, sorting in reverse chronological order, selecting images at a certain time interval in chronological order, or selecting a certain time span (the time span takes one time node as the start time and another time node as the end time, and playback runs from the start time to the end time), and so on.
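A minimal sketch of this step, assuming shooting time as the first microscopic parameter and OpenCV's VideoWriter for encoding; the file name, frame rate and function name are illustrative assumptions rather than part of the application.

```python
import cv2

def build_sub_video(images_with_time, out_path="sub_region.mp4", fps=30, reverse=False):
    """images_with_time: list of (timestamp, BGR image) pairs for one sub-region."""
    # order the microscopic image information by its parameter assignment (here: time)
    ordered = sorted(images_with_time, key=lambda item: item[0], reverse=reverse)
    h, w = ordered[0][1].shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for _, frame in ordered:
        writer.write(frame)
    writer.release()
    return out_path
```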
In some embodiments, the first microscopic sub-video information may be a microscopic video corresponding to a sub-region in a different target object, or may be a microscopic video corresponding to a different sub-region in the same target object, or may even be a microscopic video corresponding to the same sub-region of the same target object under different second microscopic parameters, where the second microscopic parameters are different from the current first microscopic parameters, for example, the current first microscopic sub-video information uses shooting time information as a playing axis, and the second microscopic parameters of the first microscopic sub-video information of the sub-region are one of other first microscopic parameters (such as shooting height) except for shooting time. For example, when at least two pieces of first microscopic sub-video information exist in the plurality of pieces of first microscopic sub-video information corresponding to the same sub-area, assignment of second microscopic parameters of the at least two pieces of microscopic sub-video information is different, wherein the second microscopic parameters include other first microscopic parameters besides the first microscopic parameters.
In step S102, the apparatus synchronously presents the plurality of first microscopic sub-video information according to the first microscopic parameter sequence. For example, the first microscopic parameter sequence is determined according to a specific order of multiple assignments of the first microscopic parameter, where the multiple assignments correspond to the shooting moments of the corresponding microscopic image information while the first microscopic parameter gradually changes. Based on the assignment corresponding to each microscopic image information at the time of shooting and on the first microscopic parameter sequence information, the multiple first microscopic sub-video information can be synchronously presented through a display device: for example, the current display screen is divided into multiple areas (equally, or as required, etc.), the corresponding multiple first microscopic sub-video information is presented in these areas, and at any given moment on the playing axis the assignment of the first microscopic parameter corresponding to each first microscopic sub-video information is the same. For instance, the computing device acquires multiple first microscopic sub-video information, each being video information of a sub-region over the focal plane heights from A to B; when the multiple first microscopic sub-video information is presented on the display screen, the focal plane height corresponding to every first microscopic sub-video information area at each moment is the same.
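A minimal sketch of this synchronisation rule, assuming every sub-video stores its frames together with their parameter assignments; rendering is stubbed out with a print statement and all names are illustrative.

```python
from typing import Dict, List, Tuple

def play_synchronously(sub_videos: Dict[str, List[Tuple[float, bytes]]],
                       axis_values: List[float]) -> None:
    """sub_videos: sub_region_id -> list of (assignment, frame) pairs;
    axis_values: the shared first microscopic parameter sequence (e.g. focal heights A..B)."""
    # index the frames of every sub-video by their parameter assignment
    indexed = {rid: dict(pairs) for rid, pairs in sub_videos.items()}
    for value in axis_values:
        # every presentation area shows the frame captured at the same assignment
        current = {rid: frames.get(value) for rid, frames in indexed.items()}
        shown = sum(frame is not None for frame in current.values())
        print(f"axis value {value}: showing {shown} sub-video frames side by side")
```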
In some embodiments, the method further comprises step S103 (not shown). In step S103, the computing device generates a corresponding regulation instruction based on the user's regulation operation on the plurality of first microscopic sub-video information; in step S102, the computing device synchronously presents the plurality of first microscopic sub-video information based on the regulation instruction and the first microscopic parameter sequence. For example, the regulation operations include, but are not limited to, the user's operations on the play mode, play speed, presentation view angle, presentation position, presentation window size, parameter switching, video dimension switching and marking of the plurality of microscopic sub-video information, and the corresponding regulation instructions include, but are not limited to, instructions concerning the play mode, play speed, presentation view angle, presentation position, presentation window size, parameter switching, video dimension switching and marking of the plurality of first microscopic sub-video information. The computing device further comprises an input device for acquiring the user's input, such as a touch pad, a keyboard, a mouse or a touch screen; the computing device can acquire the user's control operations, such as touching, clicking or scrolling a wheel, and generate the corresponding regulation instruction. In some embodiments, the regulation instruction is used to adjust video parameters of the first microscopic sub-video information; based on the regulation instruction, the computing device may adjust the video parameters of the corresponding first microscopic sub-video information, such as the play mode, play speed, presentation view angle of the target object, presentation position, presentation window size, and so on. In other embodiments, the regulation instructions include, but are not limited to: switching the current first microscopic parameter to a third microscopic parameter, wherein the third microscopic parameter comprises one of the other first microscopic parameters except the current first microscopic parameter information; converting the video dimension information of the plurality of first microscopic sub-video information, wherein the video dimension information comprises two-dimensional video information and three-dimensional video information; and adding marking information to at least one first microscopic sub-video information among the plurality of first microscopic sub-video information.
For example, the regulation instruction further includes switching the playing axis of the first microscopic sub-video information of the currently presented sub-regions, such as switching the first microscopic parameter to a third microscopic parameter, wherein the third microscopic parameter comprises one of the other first microscopic parameters except the current first microscopic parameter information. The regulation instruction further includes converting the video dimension information of the first microscopic sub-video information of the currently presented sub-regions: if the current first microscopic sub-video information is two-dimensional microscopic video information, it is switched to three-dimensional microscopic video information, and if the current first microscopic sub-video information is three-dimensional microscopic video information, it is switched to two-dimensional microscopic video information. The regulation corresponding to these two regulation instructions requires re-acquiring the corresponding microscopic sub-video information. For example, in some embodiments, the regulation instruction includes switching the current first microscopic parameter to a third microscopic parameter, wherein the third microscopic parameter comprises one of the other first microscopic parameters except the current first microscopic parameter. The method further includes step S104 (not shown); in step S104, the computing device obtains a plurality of third microscopic sub-video information according to the regulation instruction, wherein each third microscopic sub-video information comprises a plurality of microscopic image information of a corresponding sub-region with respect to a third microscopic parameter sequence, each third microscopic sub-video information corresponds to one of the plurality of first microscopic sub-video information, and the third microscopic parameter sequence comprises a plurality of assignments of the third microscopic parameter. In step S102, the plurality of third microscopic sub-video information is synchronously presented according to the third microscopic parameter sequence. For example, the first microscopic parameter corresponding to the first microscopic sub-video information is shooting time information; based on the user's operation, the computing device generates a regulation instruction for switching the current shooting time information to focal plane height information, and then obtains the third microscopic sub-video information corresponding to the focal plane height information, for example by receiving third microscopic sub-video information about the focal plane height information sent by another device, or by capturing microscopic image information about the focal plane height information via the microscopic system and generating the corresponding third microscopic sub-video information, and so on. Subsequently, the computing device synchronously presents the plurality of third microscopic sub-video information according to the third microscopic parameter sequence.
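One possible way to dispatch such regulation instructions is sketched below; the instruction format, field names and state model are assumptions made for illustration only.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PlaybackState:
    first_parameter: str = "shooting_time"   # current playing-axis parameter
    dimension: str = "2d"                    # "2d" or "3d" microscopic sub-video information
    markers: List[Tuple[int, int, int, int]] = field(default_factory=list)  # (x, y, w, h) boxes

def apply_regulation(state: PlaybackState, instruction: dict) -> PlaybackState:
    kind = instruction["kind"]
    if kind == "switch_parameter":
        # the third microscopic parameter replaces the current playing axis;
        # the corresponding sub-videos then have to be re-acquired
        state.first_parameter = instruction["third_parameter"]
    elif kind == "switch_dimension":
        # toggle between two-dimensional and three-dimensional sub-video information
        state.dimension = "3d" if state.dimension == "2d" else "2d"
    elif kind == "add_marker":
        # rectangle marking the content of interest in at least one sub-video
        state.markers.append(instruction["box"])
    return state

state = apply_regulation(PlaybackState(), {"kind": "switch_parameter",
                                           "third_parameter": "focal_plane_height"})
```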
For example, the regulatory instruction information includes adding marker information to at least one of the plurality of first microscopic sub-video information; in step S102, the computing device synchronously presents the plurality of first microscopic sub-video information according to the first microscopic parameter sequence, and superimposes and presents the marking information in the at least one first microscopic sub-video information; wherein the method comprises a step S105 (not shown), in which step S105 the computing device displays said marker information in a tracking overlay in a subsequent video frame of said at least one first microscopic sub-video information. For example, based on the user's operation, the computing device generates corresponding regulatory instructions for adding marking information to the first microscopic sub-video information of interest to the user, the marking information being used to mark the position of the content of interest to the user in the video frame in the first microscopic sub-video information, such as coordinates of at least three corner points or coordinates of diagonal points of a rectangular frame in a pixel coordinate system in the video frame. The computing device then performs object tracking on the content in subsequent video frames based on the characteristics of the content of interest to the user in the rectangular box, continuously marking the corresponding region, and so on.
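The tracking overlay could, for instance, be approximated by template matching of the marked rectangle in each subsequent frame; this is only an illustrative stand-in (OpenCV, grayscale frames and an (x, y, w, h) box are assumed), not the tracker actually used by the application.

```python
import cv2

def track_marker(frames, box):
    """frames: list of grayscale frames; box: (x, y, w, h) marked in frames[0].
    Returns the top-left position of the marked region in every frame."""
    x, y, w, h = box
    template = frames[0][y:y + h, x:x + w]
    positions = [(x, y)]
    for frame in frames[1:]:
        result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(result)   # best-matching top-left corner
        positions.append(max_loc)
    return positions
```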
In some embodiments, the method further includes step S106 (not shown). In step S106, the computing device performs contrast matching on the plurality of microscopic image information presented at the same moment by the plurality of first microscopic sub-video information, and if the matching degree information of one or more microscopic image information satisfies matching degree threshold information, generates corresponding matching result information and performs a corresponding matching operation, where the matching result information includes the presentation time information of the one or more microscopic image information. For example, the computing device synchronously presents the plurality of first microscopic sub-video information and performs contrast matching on them: for each video frame of each first microscopic sub-video information it determines a matching value against the video frames of the other microscopic sub-videos at the same position on the unified playing axis, and if the corresponding matching degree information satisfies the matching degree threshold information, it generates corresponding matching result information and performs the corresponding matching operation, where the matching result information includes the presentation time information of each matched video frame or the assignment of the first microscopic parameter at the corresponding position on the playing axis, and the like. The contrast matching can be performed according to the similarity between feature points of the video frames, and the matching degree information includes, for example, the proportion of similar feature points to the total number of feature points in each video. In some embodiments, the matching degree information satisfying the matching degree threshold information includes, but is not limited to: the matching degree information is greater than or equal to first matching degree threshold information; the matching degree information is less than or equal to second matching degree threshold information. For example, if the computing device determines, based on user requirements, whether the video frames in the first microscopic sub-video information are similar, the corresponding video frames are determined to be similar when the matching degree is greater than or equal to the first matching degree threshold information; if the computing device determines, based on user requirements, whether the video frames in the first microscopic sub-video information differ, the corresponding video frames are determined to differ when the matching degree is less than or equal to the second matching degree threshold information. In some embodiments, the matching operation includes, but is not limited to: presenting the matching result information; suspending the presentation of the plurality of first microscopic sub-video information when the presentation time of the plurality of first microscopic sub-video information reaches the presentation time information of the one or more microscopic image information; marking the one or more microscopic image information when the presentation time of the plurality of first microscopic sub-video information reaches the presentation time information of the one or more microscopic image information; and determining, from the one or more first microscopic sub-video information, the matching region found by the contrast matching, and adding a label to that matching region.
For example, after obtaining the corresponding matching result information, the computing device presents it through the display device, so as to assist the user in analyzing the matching result and the like. For another example, according to the presentation time information of each video frame that meets the condition, when the first microscopic sub-video information is played to the corresponding video frame, the computing device pauses the playback of the first microscopic sub-video information, so that the user can observe the corresponding microscopic data. For another example, if one or more video frames at a certain presentation time meet the condition, the computing device marks those video frames in the first microscopic sub-video information, so that the user can easily distinguish the video frames that meet the matching condition. For another example, the computing device may further act on the videos that satisfy each matching condition and mark the corresponding matching region in the video frame, e.g., marking a similar region for similarity matching and a difference region for difference matching, and so on. In some embodiments, the contrast matching includes, but is not limited to: comparing and matching the microscopic image information in the plurality of first microscopic sub-video information with template image information of the at least one target object, to obtain the matching degree information corresponding to the microscopic image information in each first microscopic sub-video information; comparing and matching the microscopic image information in the plurality of first microscopic sub-video information with the template image information of the at least one target object to obtain initial matching values, and synthesizing the initial matching values of each first microscopic sub-video information to obtain the matching degree information of the microscopic image information in that first microscopic sub-video information; comparing and matching the microscopic image information of one first microscopic sub-video information among the plurality of first microscopic sub-video information with the microscopic image information of the other microscopic sub-video information, and combining the matching results to obtain the matching degree information of the microscopic image information of that first microscopic sub-video information, wherein the assignment of the microscopic image information of that first microscopic sub-video information corresponds to the assignments of the microscopic image information of the other microscopic sub-video information; and performing similarity matching between the microscopic image information of one first microscopic sub-video information among the plurality of first microscopic sub-video information and the microscopic image information of the other microscopic sub-video information, and taking the highest matching degree as the matching degree information corresponding to the microscopic image information of that first microscopic sub-video information, wherein the assignment of the microscopic image information of that first microscopic sub-video information corresponds to the assignments of the microscopic image information of the other microscopic sub-video information.
For example, the contrast matching may be matching against an existing template: each video frame is matched against the template, a matching ratio of the feature points in each video frame to the feature points of the template is determined, and that ratio is taken as the matching degree corresponding to the video frame; or, after the template is matched against each video frame, the matching results of the video frames are subjected to a certain comprehensive processing (such as weighted averaging) and the processed result is taken as the matching degree corresponding to each video frame. Alternatively, the contrast matching compares and matches each video frame with the other video frames played at the same moment and determines the corresponding matching degree information from the matching results; for example, when four first microscopic sub-video information are presented, the current video frame of one first microscopic sub-video information is matched against the other three video frames to obtain three matching results, and the corresponding matching degree information can be determined by comprehensively processing the three matching results (such as by weighted averaging), or, as required, by taking the maximum or minimum value among the matching results as the corresponding matching degree information, and so on.
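A sketch of feature-point-based contrast matching along these lines, using ORB features; the matching degree is computed as the proportion of matched feature points, as described above, while the thresholds and the ratio test are illustrative assumptions.

```python
import cv2

def matching_degree(frame_a, frame_b, ratio=0.75):
    """Return the fraction of frame_a's feature points with a good match in frame_b."""
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    if not kp_a or des_a is None or des_b is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) / len(kp_a)

def match_result(frame_a, frame_b, similar_threshold=0.6, different_threshold=0.2):
    d = matching_degree(frame_a, frame_b)
    return {"similar": d >= similar_threshold,       # first matching degree threshold
            "different": d <= different_threshold,   # second matching degree threshold
            "degree": d}
```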
In some embodiments, the computing device includes a user device and a network device, and when applied to the user device, step S101 includes sub-step S1011 and sub-step S1012, as shown in fig. 2. In step S1011, the user device sends a microscopic sub-video request to the network device, wherein the microscopic sub-video request includes identification information of sub-regions and a first microscopic parameter, and the plurality of sub-regions are included in at least one target object; in step S1012, the user device receives a plurality of first microscopic sub-video information returned by the network device, where each first microscopic sub-video information includes a plurality of microscopic image information of a corresponding sub-region with respect to a first microscopic parameter sequence, and the first microscopic parameter sequence includes a plurality of assignments of the first microscopic parameter. For example, the user device sends a microscopic sub-video request about the target object to the network device, wherein the microscopic sub-video request includes identification information about the target object and a first microscopic parameter serving as the playing axis of the microscopic sub-video information, and the identification information includes an identifier for determining the corresponding microscopic sub-video information, and the like. The network device stores the correspondence between the identification information of the target object and the microscopic image information or microscopic sub-video information of the target object; it determines the corresponding microscopic sub-video information based on the identification information of the target object uploaded by the user device, and determines, from that microscopic sub-video information, a plurality of first microscopic sub-video information that conform to the first microscopic parameter. The network device then returns the plurality of first microscopic sub-video information to the user device, and the user device receives and synchronously presents them. The first microscopic parameter sent by the user device may be a type of first microscopic parameter, or one or more assignments of the first microscopic parameter; if the first microscopic parameter includes one or more assignments, the first microscopic parameter sequence in the first microscopic sub-video information returned by the network device includes those assignments.
In some embodiments, in step S1012, the user device receives access link information for a plurality of first microscopic sub-video information returned by the network device, where each first microscopic sub-video information includes a plurality of microscopic image information of a corresponding sub-region with respect to a first microscopic parameter sequence, and the first microscopic parameter sequence includes a plurality of assignments of the first microscopic parameter; in step S102, a corresponding web page is accessed according to the access link information, and the plurality of first microscopic sub-video information is synchronously presented on the web page according to the first microscopic parameter sequence. For example, the user device sends a microscopic sub-video request about the target object to the network device, wherein the microscopic sub-video request includes identification information about the target object and a first microscopic parameter serving as the playing axis of the microscopic sub-video information, and the identification information includes an identifier for determining the corresponding microscopic sub-video information, and the like. The network device stores the correspondence between the identification information of the target object and the microscopic image information or microscopic sub-video information of the target object; it determines the corresponding microscopic sub-video information based on the identification information of the target object uploaded by the user device, determines, from that microscopic sub-video information, a plurality of first microscopic sub-video information that conform to the first microscopic parameter, and generates access link information for the plurality of first microscopic sub-video information. The network device then returns the access link information to the user device; the user device receives the access link information, accesses the corresponding web page based on it, and synchronously presents the plurality of first microscopic sub-video information in that web page.
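For illustration, the request and the access-link variant could be exercised over HTTP roughly as below; the endpoint URL, field names and use of the requests library are assumptions and not part of the application.

```python
import requests
import webbrowser

# hypothetical endpoint exposed by the network device
ENDPOINT = "https://network-device.example/api/micro-sub-videos"

def request_sub_videos(sub_region_ids, first_parameter, want_links=False):
    """Send a microscopic sub-video request; returns the parsed JSON response."""
    payload = {
        "sub_region_ids": sub_region_ids,      # identification information of the sub-regions
        "first_parameter": first_parameter,    # e.g. "focal_plane_height"
        "return_links": want_links,            # ask for access links instead of the videos
    }
    resp = requests.post(ENDPOINT, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    result = request_sub_videos(["region-1", "region-2"], "focal_plane_height", want_links=True)
    for link in result.get("access_links", []):
        webbrowser.open(link)   # present the sub-videos synchronously in the returned web page
```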
In some embodiments, the microscopic sub-video request further comprises a second microscopic parameter, wherein the second microscopic parameter comprises one of the other first microscopic parameters besides the first microscopic parameter; in step S1012, the user device receives a plurality of first microscopic sub-video information returned by the network device, where each first microscopic sub-video information includes a plurality of microscopic image information of a corresponding sub-region with respect to a first microscopic parameter sequence, the first microscopic parameter sequence includes a plurality of assignments of the first microscopic parameter, at least two microscopic sub-videos corresponding to the same sub-region exist among the plurality of first microscopic sub-video information, and the assignments of the second microscopic parameter of these at least two microscopic sub-video information are different. For example, the user device sends a microscopic sub-video request to the network device based on a request of the user, wherein the request further comprises a second microscopic parameter, the second microscopic parameter is used for acquiring different first microscopic sub-video information of the same sub-region of the same target object, and the second microscopic parameter information comprises one of the other first microscopic parameters besides the first microscopic parameter. Based on the second microscopic parameter of the microscopic sub-video request, the network device acquires first microscopic sub-video information of the same sub-region corresponding to different assignments of the second microscopic parameter; if the second microscopic parameter contains specific multiple assignments, the first microscopic sub-video information corresponding to those assignments of the second microscopic parameter is acquired. The network device then returns the plurality of first microscopic sub-video information to the user device.
In some embodiments, the identification information of the sub-regions includes: a plurality of microscopic image information of the sub-regions, wherein each sub-region corresponds to at least one piece of microscopic image information; key fields of the sub-regions; image information of the sub-regions; microscopic record information of the sub-regions; unique identification code information of the sub-regions; indication information of the sub-regions, used for indicating the range in which each sub-region is located in the image of the target object to which it belongs. For example, the identification information includes an identifier or the like for determining corresponding microscopic video information, including but not limited to a plurality of microscopic image information of the sub-region, such as microscopic image information about the sub-region, based on which the network device may obtain corresponding first microscopic sub-video information; or the identification information includes key fields of the sub-regions, such as names of the sub-regions or keywords extracted from the names of the sub-regions and used for searching for the sub-regions; the identification information includes microscopic record information of the sub-regions, such as a history record of microscopic image information or microscopic sub-video information of the sub-regions uploaded or searched by the user in an application; unique identification code information of the sub-regions, such as unique identification codes set for the sub-regions in the application; the identification information may include a plurality of image information of the sub-region, e.g., the network device may identify a corresponding sub-region in a database based on the image information; the identification information further includes indication information of the sub-regions, where the indication information is used to indicate, in the image of the target object to which each sub-region belongs, the range in which each sub-region is located. For example, the user equipment currently presents one or more target objects through a display device, and based on a selection operation of the user (such as a right-click or a click-and-drag frame selection, etc.), the user equipment obtains indication information about the sub-regions in the one or more target objects, that is, the sub-region range of the selected area corresponding to the selection operation, where the sub-regions are included in the video frames of the target objects. In some embodiments, the plurality of identification information includes indication information of the plurality of sub-regions, where the indication information is used to indicate, in the image of the target object to which each sub-region belongs, the range in which each sub-region is located; wherein the method further comprises a step S107 (not shown), in which the user equipment determines, based on a range selection operation by the user, the range in which the sub-regions are located in the current image of the at least one target object and generates the indication information.
Referring to fig. 3, there is shown a method for presenting a plurality of microscopic sub-video information, the method being implemented by cooperation of a user device and a network device, and specifically including steps S1011, S1012 and S102 applied to the user device, and steps S201, S202 and S203 applied to the network device, etc.:
in step S1011, the user equipment transmits a microscopic sub-video request for a plurality of sub-areas to the network equipment, wherein the microscopic sub-video request includes identification information of the sub-areas and the first microscopic parameter information;
in step S201, the network device receives the microscopic sub-video request;
in step S202, determining a corresponding plurality of first microscopic sub-video information according to the identification information of the sub-region and the first microscopic parameter information, where each first microscopic sub-video information includes a plurality of microscopic image information of the corresponding sub-region about a first microscopic parameter sequence, and the first microscopic parameter sequence includes a plurality of assignments of the first microscopic parameter;
in step S203, the network device returns the plurality of first microscopic sub-video information to the user device;
in step S102, the user equipment receives the plurality of first microscopic sub-video information returned by the network device, and synchronously presents the plurality of first microscopic sub-video information according to the first microscopic parameter sequence.
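The network-device side of steps S201 to S203 above may be sketched as follows, under the assumption of a simple in-memory store keyed by sub-region identification; the storage layout and field names are illustrative assumptions only.

```python
# Minimal server-side sketch of steps S201-S203; the storage layout
# (an in-memory dict) and field names are assumptions for illustration.
from typing import Any, Dict, List

# Hypothetical store: sub-region id -> list of stored sub-video records, each
# record carrying the first microscopic parameter it uses as its playing axis.
SUB_VIDEO_STORE: Dict[str, List[Dict[str, Any]]] = {}

def handle_micro_sub_video_request(sub_region_ids: List[str],
                                   first_parameter: str) -> List[Dict[str, Any]]:
    """Step S202: determine the first microscopic sub-video information matching
    the identification information and the first microscopic parameter."""
    selected = []
    for region_id in sub_region_ids:
        for record in SUB_VIDEO_STORE.get(region_id, []):
            if record["first_parameter"] == first_parameter:
                selected.append(record)
    # Step S203: return the plurality of first microscopic sub-video information.
    return selected
```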
Referring to the method shown in fig. 3, fig. 4 shows a method for presenting a plurality of microscopic sub-video information, which is applied to a network device and includes step S201, step S202 and step S203. In step S201, the network device receives a microscopic sub-video request sent by the user device, where the microscopic sub-video request includes identification information of a sub-region and the first microscopic parameter; in step S202, the network device determines a corresponding plurality of first microscopic sub-video information according to the identification information of the sub-region and the first microscopic parameter, where each first microscopic sub-video information includes a plurality of microscopic image information about a first microscopic parameter sequence of the corresponding sub-region, and the first microscopic parameter sequence includes a plurality of assignments corresponding to the first microscopic parameter; in step S203, the network device returns the plurality of first microscopic sub-video information to the user device. For example, the user device sends a microscopic sub-video request about the target object to the network device, where the microscopic sub-video request includes identification information about the target object and a first microscopic parameter serving as the playing axis of the microscopic sub-video information, and the identification information includes an identifier for determining corresponding three-dimensional microscopic video information, and the like. The network device stores the corresponding relation between the identification information of the target object and the microscopic image information or microscopic sub-video information of the target object; the cloud end determines corresponding microscopic sub-video information based on the identification information of the target object uploaded by the user device, and determines, from the determined microscopic sub-video information and based on the first microscopic parameter, a plurality of first microscopic sub-video information conforming to the corresponding first microscopic parameter; the network device then returns the plurality of first microscopic sub-video information to the user device, and the user device receives and synchronously presents the plurality of first microscopic sub-video information. The first microscopic parameter sent by the user device may be a type of the first microscopic parameter, or one or more assignments of the first microscopic parameter; if the first microscopic parameter includes one or more assignments, the first microscopic parameter sequence in the first microscopic sub-video information returned by the network device includes the one or more assignments.
Wherein the identification information includes an identifier or the like for determining corresponding microscopic video information, including but not limited to a plurality of microscopic image information of the sub-region, such as microscopic image information about the sub-region, based on which the network device may obtain corresponding first microscopic sub-video information; or the identification information includes key fields of the sub-regions, such as names of the sub-regions or keywords extracted from the names of the sub-regions and used for searching for the sub-regions; the identification information includes microscopic record information of the sub-regions, such as a history record of microscopic image information or microscopic sub-video information of the sub-regions uploaded or searched by the user in an application; unique identification code information of the sub-regions, such as unique identification codes set for the sub-regions in the application; the identification information may include a plurality of image information of the sub-region, e.g., the network device may identify a corresponding sub-region in a database based on the image information; the identification information further includes indication information of the sub-regions, where the indication information is used to indicate, in the image of the target object to which each sub-region belongs, the range in which each sub-region is located. For example, the user equipment currently presents one or more target objects through a display device, and based on a selection operation of the user (such as a right-click or a click-and-drag frame selection, etc.), the user equipment obtains indication information about the sub-regions in the one or more target objects, that is, the sub-region range of the selected area corresponding to the selection operation, where the sub-regions are included in the video frames of the target objects. In some embodiments, in step S202, the network device determines a corresponding plurality of second microscopic sub-video information according to the identification information of the sub-region and the first microscopic parameter, where each second microscopic sub-video information has a second microscopic parameter sequence of the corresponding sub-region, each second microscopic sub-video information includes a plurality of microscopic image information of the corresponding second microscopic parameter sequence, and the second microscopic parameter sequence includes a plurality of assignments corresponding to the first microscopic parameter; if at least one public microscopic parameter subsequence exists in the second microscopic parameter sequences corresponding to the plurality of second microscopic sub-video information, a corresponding first microscopic parameter sequence is determined according to the at least one public microscopic parameter subsequence, and the first microscopic sub-video information corresponding to each piece of second microscopic sub-video information is determined according to the first microscopic parameter sequence, wherein the first microscopic sub-video information belongs to a sub-video of the second microscopic sub-video information, and the first microscopic parameter sequence belongs to a subsequence of the second microscopic parameter sequence.
For example, the network device determines, according to the first microscopic parameter in the microscopic sub-video request sent by the user device, a plurality of second microscopic sub-video information that take the first microscopic parameter as the playing axis, where the assignments of the first microscopic parameter corresponding to the playing axes of the second microscopic sub-video information may differ, that is, the corresponding second microscopic parameter sequences differ. A common parameter sequence is determined from the second microscopic parameter sequences, and a corresponding first microscopic parameter sequence is determined according to the common parameter sequence, for example, by combining all the common parameter sequences into a new first microscopic parameter sequence, or by taking one segment of the common parameter sequence as the corresponding first parameter sequence; the video frames of each second microscopic sub-video information are then sorted or deleted according to the assignments of the first microscopic parameter in the first parameter sequence, so as to obtain the corresponding first microscopic sub-video information. If the microscopic sub-video request contains one or more assignments of the first microscopic parameter, the first microscopic parameter sequence should contain the one or more assignments.
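A minimal sketch of deriving the first microscopic parameter sequence from the public (common) subsequence of several second microscopic parameter sequences follows, assuming each second microscopic sub-video is represented as a mapping from parameter assignment to video frame; the data layout is an illustrative assumption.

```python
# Sketch of deriving a common first microscopic parameter sequence and trimming
# each second sub-video to it; the assignment -> frame layout is an assumption.
from typing import Any, Dict, List

def derive_first_sub_videos(second_sub_videos: List[Dict[float, Any]]) -> List[Dict[float, Any]]:
    # Public (common) assignments present in every second parameter sequence.
    common = set(second_sub_videos[0])
    for video in second_sub_videos[1:]:
        common &= set(video)

    # Order the common assignments to form the first microscopic parameter sequence.
    first_sequence = sorted(common)

    # Keep only frames whose assignments lie in the first parameter sequence,
    # ordered along that sequence (the "sorting or deleting" of video frames).
    return [{value: video[value] for value in first_sequence} for video in second_sub_videos]
```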
In some embodiments, the microscopic sub-video request further comprises second microscopic parameter information, wherein the second microscopic parameter comprises one of the other first microscopic parameters besides the first microscopic parameter; in step S202, the network device determines a corresponding plurality of first microscopic sub-video information according to the identification information of the sub-region and the first microscopic parameter, where each first microscopic sub-video information includes a plurality of microscopic image information related to a first microscopic parameter sequence of the corresponding sub-region, the first microscopic parameter sequence includes a plurality of assignments corresponding to the first microscopic parameter, at least two microscopic sub-videos in the plurality of first microscopic sub-video information correspond to the same sub-region, and the assignments of the second microscopic parameter information of the at least two microscopic sub-video information are different. For example, the user device sends a microscopic sub-video request to the network device based on a request of the user, wherein the request further comprises a second microscopic parameter, the second microscopic parameter is used for acquiring different first microscopic sub-video information of the same sub-region of the same target object, and the second microscopic parameter comprises one of the other first microscopic parameters besides the first microscopic parameter. The network device acquires, based on the second microscopic parameter in the microscopic sub-video request, first microscopic sub-video information of the same sub-region corresponding to different assignments of the second microscopic parameter; if the second microscopic parameter contains a plurality of specific assignments, the first microscopic sub-video information corresponding to the plurality of assignments of the second microscopic parameter is acquired. The network device then returns the plurality of first microscopic sub-video information to the user device.
Fig. 5 illustrates an apparatus for presenting a plurality of microscopic sub-video information according to one aspect of the present application, wherein the apparatus includes a one-to-one module 101 and a second module 102. The one-to-one module 101 is configured to obtain a plurality of first microscopic sub-video information, where each first microscopic sub-video information includes a plurality of microscopic image information about a first microscopic parameter sequence of the corresponding sub-region, and the first microscopic parameter sequence includes a plurality of assignments corresponding to the first microscopic parameter; the second module 102 is configured to synchronously present the plurality of first microscopic sub-video information according to the first microscopic parameter sequence. The apparatus is applicable to a computing device, wherein the computing device includes but is not limited to a user device, a network device, or a device formed by integrating a user device and a network device through a network; the user device includes but is not limited to any terminal capable of human-computer interaction with a user (such as human-computer interaction through a touch pad), and the network device includes but is not limited to a computer, a network host, a single network server, a cloud formed by a plurality of network servers, or a plurality of servers.
Specifically, the one-to-one module 101 is configured to obtain a plurality of first microscopic sub-video information, where each first microscopic sub-video information includes a plurality of microscopic image information about a first microscopic parameter sequence of the corresponding sub-region, and the first microscopic parameter sequence includes a plurality of assignments corresponding to the first microscopic parameter. For example, the first microscopic sub-video information includes microscopic video information generated from a plurality of microscopic image information of the sub-region in accordance with a first microscopic parameter sequence and specific video parameters, where "first" in the first microscopic sub-video information does not denote any specific order. The microscopic sub-video information may also differ according to the dimension information of the microscopic image information from which the video is generated; for example, the microscopic image information may include two-dimensional microscopic image information and/or three-dimensional microscopic image information, and the corresponding first microscopic sub-video information may include two-dimensional microscopic sub-video information and/or three-dimensional microscopic sub-video information. In some embodiments, the first microscopic sub-video information includes, but is not limited to: two-dimensional microscopic sub-video information; three-dimensional microscopic sub-video information. The two-dimensional image information may be obtained by a high-definition image pickup device (such as a CCD camera) combined with the optical lens of a microscope, may be a captured image of the target object or of a sub-region of the target object, or may be two-dimensional microscopic image information in which the sub-region is clearer as a whole, obtained by combining the clearer parts of the depth of field of each of a plurality of captured images of the sub-region; the two-dimensional microscopic sub-video information is generated according to the two-dimensional image information of the sub-region. The three-dimensional image information is three-dimensional microscopic image information in which the sub-region is clearer as a whole, generated according to the clearer parts of the depth of field of each of the plurality of captured images of the sub-region and the height information of each pixel point; the coordinate system of the three-dimensional microscopic image information may be a world coordinate system established with the center of the sub-region as the origin, where the focal plane height information of each captured image includes the coordinate information of the corresponding captured image in the Z-axis direction (the moving direction of the objective lens).
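The combination of the clearer depth-of-field parts of a focal stack into an overall clearer two-dimensional image and a per-pixel height map may be sketched as follows; the gradient-magnitude sharpness measure is an illustrative assumption, not a limitation of this application.

```python
# Simplified focus-stacking sketch (assumption: sharpness is approximated by the
# local gradient magnitude); composes an "all clearer" 2D image and a per-pixel
# height map from captures taken at different focal plane heights.
import numpy as np

def compose_from_focal_stack(images: np.ndarray, focal_heights: np.ndarray):
    """images: (N, H, W) grayscale focal stack; focal_heights: (N,) Z positions."""
    # Per-image sharpness map from the gradient magnitude.
    gy, gx = np.gradient(images.astype(float), axis=(1, 2))
    sharpness = np.hypot(gx, gy)                 # (N, H, W)

    best = np.argmax(sharpness, axis=0)          # index of the sharpest capture per pixel
    rows, cols = np.indices(best.shape)

    composite = images[best, rows, cols]         # two-dimensional microscopic image
    height_map = focal_heights[best]             # Z coordinate per pixel (3D information)
    return composite, height_map
```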
The first microscopic sub-video information corresponds to microscopic image information captured according to a first microscopic parameter, and the first microscopic parameter sequence includes a plurality of assignments of the first microscopic parameter, the assignments being determined according to the value of the parameter for the sub-region at the moment the capturing device acquires each captured image. The first microscopic parameter usually changes within a certain range during capturing; the capturing device captures images of the sub-region during this change, and the value of the first microscopic parameter at the time node at which each image of the sub-region is captured is recorded and used as an assignment of the first microscopic parameter. In some embodiments, the first microscopic parameters include, but are not limited to: shooting time information; focal plane height information; rotation angle information; pitch angle information; yaw angle information; brightness information of the illuminating lamp; lighting light color information; temperature information; humidity information; pH value information; fluorescence band information; polarized light angle information; DIC rotation angle information. For example, the first microscopic parameter information includes an independent variable parameter that can change continuously and gradually in the microscopic system where the target object is located; the value of the parameter may be a specific value or an interval, for example, an interval corresponding to [T-T0, T+T0], and so on. The operation of the specific parameters related to the first microscopic parameters is the same as or similar to that of the embodiment shown in fig. 1, and is not described in detail herein, and is incorporated herein by reference.
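A minimal sketch of recording the assignments of the first microscopic parameter during capture follows, assuming hypothetical hardware stubs set_focal_height and capture_image.

```python
# Sketch of recording the first microscopic parameter assignments during capture;
# `set_focal_height` and `capture_image` are hypothetical hardware stubs.
import numpy as np

def sweep_and_record(set_focal_height, capture_image, start: float, stop: float, steps: int):
    """Sweep the first microscopic parameter (here: focal plane height) over a
    range and record, for every captured frame, the assignment in effect."""
    assignments, frames = [], []
    for value in np.linspace(start, stop, steps):
        set_focal_height(value)          # gradually change the first microscopic parameter
        frames.append(capture_image())   # capture the sub-region at this assignment
        assignments.append(value)        # recorded value becomes one assignment in the sequence
    # The ordered assignments form the first microscopic parameter sequence.
    return assignments, frames
```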
In some embodiments, the one-to-one module 101 is configured to acquire, for each sub-region, a plurality of microscopic image information about the first microscopic parameter sequence, and to determine the first microscopic sub-video information of each sub-region according to the first microscopic parameter sequence and the plurality of microscopic image information of that sub-region. The operation of generating the first microscopic sub-video of the sub-region is the same as or similar to that of the related embodiment shown in fig. 1, and is not described in detail herein, and is incorporated herein by reference.
In some embodiments, the first microscopic sub-video information may be a microscopic video corresponding to a sub-region in a different target object, or may be a microscopic video corresponding to a different sub-region in the same target object, or may even be a microscopic video corresponding to the same sub-region of the same target object under different second microscopic parameters, where the second microscopic parameters are different from the current first microscopic parameters, for example, the current first microscopic sub-video information uses shooting time information as a playing axis, and the second microscopic parameters of the first microscopic sub-video information of the sub-region are one of other first microscopic parameters (such as shooting height) except for shooting time. For example, when at least two pieces of first microscopic sub-video information exist in the plurality of pieces of first microscopic sub-video information corresponding to the same sub-area, assignment of second microscopic parameters of the at least two pieces of microscopic sub-video information is different, wherein the second microscopic parameters include other first microscopic parameters besides the first microscopic parameters.
The second module 102 is configured to synchronously present the plurality of first microscopic sub-video information according to the first microscopic parameter sequence. For example, the first microscopic parameter sequence is determined according to a specific order of the plurality of assignments of the first microscopic parameter, where the plurality of assignments correspond to the capturing moments of the corresponding microscopic image information during the gradual change of the first microscopic parameter. Based on the assignment of each microscopic image information at capture and the first microscopic parameter sequence information, the plurality of first microscopic sub-video information can be synchronously presented through a display device: for example, the current display screen is divided into a plurality of areas (which may be divided equally or as required, etc.), the corresponding plurality of first microscopic sub-video information are presented in the plurality of areas, and at any given moment the assignment of the corresponding first microscopic parameter on the playing axis of each first microscopic sub-video information is the same. For example, the computing device acquires a plurality of first microscopic sub-video information, where the plurality of first microscopic sub-video information is video information of sub-regions under the focal plane heights corresponding to A to B; when the plurality of first microscopic sub-video information is presented on the display screen, the focal plane height corresponding to each first microscopic sub-video information area at each moment is the same.
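The synchronous presentation along a shared first microscopic parameter sequence may be sketched as follows, assuming a hypothetical render_pane display callback and sub-videos represented as mappings from assignment to frame.

```python
# Sketch of synchronous presentation: all panes advance along one shared first
# microscopic parameter sequence, so at any instant every sub-video shows the
# frame for the same assignment. `render_pane` is a hypothetical display stub.
from typing import Any, Callable, Dict, List

def present_synchronously(first_sequence: List[float],
                          sub_videos: List[Dict[float, Any]],
                          render_pane: Callable[[int, Any], None]) -> None:
    for value in first_sequence:                 # shared playing axis
        for pane_index, video in enumerate(sub_videos):
            frame = video.get(value)             # frame captured at this assignment
            if frame is not None:
                render_pane(pane_index, frame)   # every pane shows the same assignment
```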
In some embodiments, the apparatus further includes a third module 103 (not shown), configured to generate a corresponding regulation and control instruction based on a regulation and control operation of the user on the plurality of first microscopic sub-video information; the second module 102 is configured to synchronously present the plurality of first microscopic sub-video information based on the regulation and control instruction and the first microscopic parameter sequence. In some embodiments, the regulation and control instruction includes, but is not limited to: switching the current first microscopic parameter to a third microscopic parameter, wherein the third microscopic parameter includes one of the other first microscopic parameters except the current first microscopic parameter; converting the video dimension information of the plurality of first microscopic sub-video information, wherein the video dimension information includes two-dimensional video information and three-dimensional video information; and adding marking information in at least one first microscopic sub-video information of the plurality of first microscopic sub-video information. The specific implementation of the third module 103 is the same as or similar to the embodiment of the related step S103 shown in fig. 1, and is not described in detail herein, and is incorporated herein by reference.
In some embodiments, the apparatus further includes a fourth module 104 (not shown), configured to obtain a plurality of third microscopic sub-video information according to the regulation and control instruction, where each third microscopic sub-video information includes a plurality of microscopic image information of the corresponding sub-region with respect to a third microscopic parameter sequence, each third microscopic sub-video information corresponds to one first microscopic sub-video information of the plurality of first microscopic sub-video information, and the third microscopic parameter sequence includes a plurality of assignments corresponding to the third microscopic parameter; the second module 102 is configured to synchronously present the plurality of third microscopic sub-video information according to the third microscopic parameter sequence. The specific implementation of the fourth module 104 is the same as or similar to the embodiment of the related step S104 shown in fig. 1, and is not described in detail herein, and is incorporated herein by reference.
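Switching the playing axis to a third microscopic parameter may be sketched as a re-sort of the recorded frames by that parameter's per-frame assignment; the per-frame metadata layout below is an illustrative assumption.

```python
# Sketch of switching the playing axis to a third microscopic parameter: frames
# are re-sorted by the value that parameter had when each frame was captured.
from typing import Any, Dict, List, Tuple

def switch_play_axis(frames: List[Dict[str, Any]], third_parameter: str) -> Tuple[List[float], List[Any]]:
    """frames: list of {"image": ..., "params": {parameter name: assignment}} (assumed layout)."""
    ordered = sorted(frames, key=lambda f: f["params"][third_parameter])
    third_sequence = [f["params"][third_parameter] for f in ordered]  # third parameter sequence
    images = [f["image"] for f in ordered]                            # third microscopic sub-video
    return third_sequence, images
```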
For example, the regulation and control instruction includes adding marking information in at least one first microscopic sub-video information of the plurality of first microscopic sub-video information; the second module 102 is configured to synchronously present the plurality of first microscopic sub-video information according to the first microscopic parameter sequence, and to superimpose and present the marking information in the at least one first microscopic sub-video information; the apparatus further includes a fifth module 105 (not shown), configured to track and superimpose the marking information for presentation in subsequent video frames of the at least one first microscopic sub-video information. The implementation of the fifth module 105 is the same as or similar to the embodiment of the related step S105 shown in fig. 1, and is not described in detail herein, and is incorporated herein by reference.
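Tracking the superimposed marking information in subsequent video frames may, for example, be approximated with OpenCV template matching; this is only one possible tracker, chosen here for illustration, and the patch coordinates are assumed inputs.

```python
# Sketch of tracking a user-placed marking in subsequent video frames via
# template matching (one possible choice; the application does not mandate a
# particular tracker). Requires OpenCV; frames are grayscale numpy arrays.
import cv2

def track_marking(frames, x, y, w, h):
    """Yield the top-left corner of the marked patch in each subsequent frame."""
    template = frames[0][y:y + h, x:x + w]           # patch under the initial marking
    for frame in frames[1:]:
        scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(scores)     # best match location (x, y)
        yield max_loc                                # overlay the marking here
```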
In some embodiments, the apparatus further includes a sixth module 106 (not shown), configured to compare and match the plurality of microscopic image information presented at the same moment in the plurality of first microscopic sub-video information, and, if the matching degree information of one or more pieces of microscopic image information satisfies the matching degree threshold information, to generate corresponding matching result information and perform a corresponding matching operation, where the matching result information includes the presentation time information of the one or more pieces of microscopic image information. In some embodiments, the matching degree information satisfying the matching degree threshold information includes, but is not limited to: the matching degree information is greater than or equal to first matching degree threshold information; the matching degree information is smaller than or equal to second matching degree threshold information. In some embodiments, the matching operation includes, but is not limited to: presenting the matching result information; pausing the presentation of the plurality of first microscopic sub-video information when the presentation time of the plurality of first microscopic sub-video information reaches the presentation time information of the one or more pieces of microscopic image information; marking the one or more pieces of microscopic image information when the presentation time of the plurality of first microscopic sub-video information reaches the presentation time information of the one or more pieces of microscopic image information; and determining, from the one or more first microscopic sub-video information, a matching region obtained by the comparison and matching, and adding a label to the matching region.
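The per-moment comparison and matching against a matching degree threshold may be sketched as follows, using a normalized correlation score against a template image (one of the options listed below); the threshold value and data layout are illustrative assumptions.

```python
# Sketch of the per-moment comparison and matching: frames presented at the same
# assignment are compared against a template image, and a matching operation is
# triggered when a matching degree threshold is satisfied.
import numpy as np

def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def match_during_playback(first_sequence, sub_videos, template, upper_threshold=0.8):
    """sub_videos: list of dicts mapping assignment -> frame (numpy arrays of equal shape)."""
    results = []
    for value in first_sequence:                       # presentation time axis
        for video in sub_videos:
            frame = video.get(value)
            if frame is None:
                continue
            score = normalized_correlation(frame, template)
            if score >= upper_threshold:               # matching degree meets the threshold
                results.append({"assignment": value, "score": score})
                # a matching operation could pause playback or mark the frame here
    return results
```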
In some embodiments, the comparison and matching includes, but is not limited to: comparing and matching the microscopic image information in the plurality of first microscopic sub-video information with template image information of the at least one target object to obtain the matching degree information corresponding to the microscopic image information in each first microscopic sub-video information; comparing and matching the microscopic image information in the plurality of first microscopic sub-video information with the template image information of the at least one target object to obtain initial matching values, and synthesizing the matching degree information of the microscopic image information in each first microscopic sub-video information according to the initial matching values of each first microscopic sub-video information; comparing and matching the microscopic image information of one first microscopic sub-video information in the plurality of first microscopic sub-video information with the microscopic image information of the other microscopic sub-video information except that first microscopic sub-video information, and obtaining the matching degree information of the microscopic image information of that first microscopic sub-video information by combining the matching results, wherein the assignment of the microscopic image information of that first microscopic sub-video information corresponds to the assignment of the microscopic image information of the other microscopic sub-video information; and performing similarity matching between the microscopic image information of one first microscopic sub-video information in the plurality of first microscopic sub-video information and the microscopic image information of the other microscopic sub-video information except that first microscopic sub-video information, and taking the highest matching degree as the matching degree information corresponding to the microscopic image information of that first microscopic sub-video information, wherein the assignment of the microscopic image information of that first microscopic sub-video information corresponds to the assignment of the microscopic image information of the other microscopic sub-video information. The operation of the comparison and matching process is the same as or similar to that of the related embodiment shown in fig. 1, and is not described in detail herein, and is incorporated herein by reference.
In some embodiments, the computing device includes a user device and a network device; when the one-to-one module 101 is applied to the user device, the one-to-one module 101 includes a one-to-one unit 1011 and a one-to-two unit 1012. The one-to-one unit 1011 is configured to send a microscopic sub-video request to the network device, where the microscopic sub-video request includes identification information of the sub-regions and the first microscopic parameter, and the plurality of sub-regions are included in at least one target object; the one-to-two unit 1012 is configured to receive a plurality of first microscopic sub-video information returned by the network device, where each first microscopic sub-video information includes a plurality of microscopic image information of the corresponding sub-region with respect to a first microscopic parameter sequence, and the first microscopic parameter sequence includes a plurality of assignments of the first microscopic parameter. The embodiments of the one-to-one unit 1011 and the one-to-two unit 1012 are the same as or similar to the embodiments of the related steps S1011 and S1012 shown in fig. 2, and are not described in detail herein, and are incorporated herein by reference.
In some embodiments, a one-to-two unit 1012 is configured to receive access link information of a plurality of first microscopic sub-video information returned by the network device, where each first microscopic sub-video information includes a plurality of microscopic image information of a corresponding sub-region with respect to a first microscopic parameter sequence, and the first microscopic parameter sequence includes a plurality of assignments of the first microscopic parameters; the second module 102 is configured to access a corresponding web page according to the access link information, and synchronously present the plurality of first microscopic sub-video information on the web page according to the first microscopic parameter sequence. The operation of synchronously presenting the plurality of first microscopic sub-video information according to the access link information is the same as or similar to that of the related embodiment shown in fig. 2, and is not described in detail herein, and is incorporated by reference.
In some embodiments, the microscopic sub-video request further comprises a second microscopic parameter, wherein the second microscopic parameter comprises one of the other first microscopic parameters besides the first microscopic parameter; the one-to-two unit 1012 receives a plurality of first microscopic sub-video information returned by the network device, where each first microscopic sub-video information includes a plurality of microscopic image information about a first microscopic parameter sequence of the corresponding sub-region, the first microscopic parameter sequence includes a plurality of assignments of the first microscopic parameter, at least two microscopic sub-videos corresponding to the same sub-region exist in the plurality of first microscopic sub-video information, and the assignments of the second microscopic parameter of the at least two microscopic sub-video information are different. The operation in which the microscopic sub-video request further includes the second microscopic parameter is the same as or similar to the related embodiment shown in fig. 2, and is not described in detail herein, and is incorporated herein by reference.
In some embodiments, the identification information of the sub-regions includes: a plurality of microscopic image information of the sub-regions, wherein each sub-region corresponds to at least one piece of microscopic image information; key fields of the sub-regions; image information of the sub-regions; microscopic record information of the sub-regions; unique identification code information of the sub-regions; indication information of the sub-regions, used for indicating the range in which each sub-region is located in the image of the target object to which it belongs. In some embodiments, the plurality of identification information includes indication information of the plurality of sub-regions, where the indication information is used to indicate, in the image of the target object to which each sub-region belongs, the range in which each sub-region is located; wherein the method further comprises a step S107 (not shown), in which the user equipment determines, based on a range selection operation by the user, the range in which the sub-regions are located in the current image of the at least one target object and generates the indication information. The operation of the identification information of the sub-regions is the same as or similar to that of the related embodiment shown in fig. 2, and is not described in detail herein, and is incorporated herein by reference.
Referring to fig. 3, a method for presenting a plurality of microscopic sub-video information is shown, which is implemented by cooperation of a user device and a network device, and specifically involves the one-to-one unit 1011, the one-to-two unit 1012 and the second module 102 applied to the user device, and a two-to-one module 201, a two-to-two module 202 and a two-to-three module 203 applied to the network device, and the like; fig. 6 shows a network device for presenting a plurality of microscopic sub-video information, comprising the two-to-one module 201, the two-to-two module 202 and the two-to-three module 203. The two-to-one module 201 is configured to receive a microscopic sub-video request sent by the user device, where the microscopic sub-video request includes identification information of a sub-region and the first microscopic parameter; the two-to-two module 202 is configured to determine a corresponding plurality of first microscopic sub-video information according to the identification information of the sub-region and the first microscopic parameter, where each first microscopic sub-video information includes a plurality of microscopic image information about a first microscopic parameter sequence of the corresponding sub-region, and the first microscopic parameter sequence includes a plurality of assignments corresponding to the first microscopic parameter; the two-to-three module 203 is configured to return the plurality of first microscopic sub-video information to the user device. For example, the user device sends a microscopic sub-video request about the target object to the network device, where the microscopic sub-video request includes identification information about the target object and a first microscopic parameter serving as the playing axis of the microscopic sub-video information, and the identification information includes an identifier for determining corresponding three-dimensional microscopic video information, and the like. The network device stores the corresponding relation between the identification information of the target object and the microscopic image information or microscopic sub-video information of the target object; the cloud end determines corresponding microscopic sub-video information based on the identification information of the target object uploaded by the user device, and determines, from the determined microscopic sub-video information and based on the first microscopic parameter, a plurality of first microscopic sub-video information conforming to the corresponding first microscopic parameter; the network device then returns the plurality of first microscopic sub-video information to the user device, and the user device receives and synchronously presents the plurality of first microscopic sub-video information. The first microscopic parameter sent by the user device may be a type of the first microscopic parameter, or one or more assignments of the first microscopic parameter; if the first microscopic parameter includes one or more assignments, the first microscopic parameter sequence in the first microscopic sub-video information returned by the network device includes the one or more assignments.
Wherein the identification information includes an identifier or the like for determining corresponding microscopic video information, including but not limited to a plurality of microscopic image information of the sub-region, such as microscopic image information about the sub-region, based on which the network device may obtain corresponding first microscopic sub-video information; or the identification information includes key fields of the sub-regions, such as names of the sub-regions or keywords extracted from the names of the sub-regions and used for searching for the sub-regions; the identification information includes microscopic record information of the sub-regions, such as a history record of microscopic image information or microscopic sub-video information of the sub-regions uploaded or searched by the user in an application; unique identification code information of the sub-regions, such as unique identification codes set for the sub-regions in the application; the identification information may include a plurality of image information of the sub-region, e.g., the network device may identify a corresponding sub-region in a database based on the image information; the identification information further includes indication information of the sub-regions, where the indication information is used to indicate, in the image of the target object to which each sub-region belongs, the range in which each sub-region is located. For example, the user equipment currently presents one or more target objects through a display device, and based on a selection operation of the user (such as a right-click or a click-and-drag frame selection, etc.), the user equipment obtains indication information about the sub-regions in the one or more target objects, that is, the sub-region range of the selected area corresponding to the selection operation, where the sub-regions are included in the video frames of the target objects. In some embodiments, the two-to-two module 202 is configured to determine a corresponding plurality of second microscopic sub-video information according to the identification information of the sub-region and the first microscopic parameter, where each second microscopic sub-video information has a second microscopic parameter sequence of the corresponding sub-region, each second microscopic sub-video information includes a plurality of microscopic image information of the corresponding second microscopic parameter sequence, and the second microscopic parameter sequence includes a plurality of assignments corresponding to the first microscopic parameter; if at least one public microscopic parameter subsequence exists in the second microscopic parameter sequences corresponding to the plurality of second microscopic sub-video information, a corresponding first microscopic parameter sequence is determined according to the at least one public microscopic parameter subsequence, and the first microscopic sub-video information corresponding to each piece of second microscopic sub-video information is determined according to the first microscopic parameter sequence, wherein the first microscopic sub-video information belongs to a sub-video of the second microscopic sub-video information, and the first microscopic parameter sequence belongs to a subsequence of the second microscopic parameter sequence. The determination operation of the first microscopic sub-video information is the same as or similar to that of the related embodiment shown in fig. 4, and is not described in detail herein, and is incorporated herein by reference.
In some embodiments, the microscopic sub-video request further comprises second microscopic parameter information, wherein the second microscopic parameter comprises one of the other first microscopic parameters besides the first microscopic parameter; the two-to-two module 202 is configured to determine a corresponding plurality of first microscopic sub-video information according to the identification information of the sub-region and the first microscopic parameter, where each first microscopic sub-video information includes a plurality of microscopic image information related to a first microscopic parameter sequence of the corresponding sub-region, the first microscopic parameter sequence includes a plurality of assignments corresponding to the first microscopic parameter, at least two microscopic sub-videos corresponding to the same sub-region exist in the plurality of first microscopic sub-video information, and the assignments of the second microscopic parameter information of the at least two microscopic sub-video information are different. The related operations in which the microscopic sub-video request further includes the second microscopic parameter information are the same as or similar to those of the related embodiment shown in fig. 4, and are not described in detail herein, and are incorporated herein by reference.
In addition to the methods and apparatus described in the above embodiments, the present application also provides a computer-readable storage medium storing computer code which, when executed, performs a method as described in any one of the preceding claims.
The present application also provides a computer program product which, when executed by a computer device, performs a method as claimed in any preceding claim.
The present application also provides a computer device comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any preceding claim.
FIG. 7 illustrates an exemplary system that may be used to implement various embodiments described herein.
In some embodiments, as shown in FIG. 7, system 700 can function as any of the above-described devices of the various described embodiments. In some embodiments, system 700 can include one or more computer-readable media (e.g., system memory or NVM/storage 720) having instructions and one or more processors (e.g., processor(s) 705) coupled with the one or more computer-readable media and configured to execute the instructions to implement the modules to perform the actions described herein.
For one embodiment, system control module 710 may include any suitable interface controller to provide any suitable interface to at least one of processor(s) 705 and/or any suitable device or component in communication with system control module 710.
The system control module 710 may include a memory controller module 730 to provide an interface to the system memory 715. Memory controller module 730 may be a hardware module, a software module, and/or a firmware module.
The system memory 715 may be used to load and store data and/or instructions for the system 700, for example. For one embodiment, system memory 715 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, the system memory 715 may include a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 710 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage device 720 and communication interface(s) 725.
For example, NVM/storage 720 may be used to store data and/or instructions. NVM/storage 720 may include any suitable nonvolatile memory (e.g., flash memory) and/or may include any suitable nonvolatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 720 may include a storage resource that is physically part of the device on which system 700 is installed or it may be accessed by the device without being part of the device. For example, NVM/storage 720 may be accessed over a network via communication interface(s) 725.
Communication interface(s) 725 may provide an interface for system 700 to communicate over one or more networks and/or with any other suitable device. The system 700 may wirelessly communicate with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 705 may be packaged together with logic of one or more controllers (e.g., memory controller module 730) of the system control module 710. For one embodiment, at least one of the processor(s) 705 may be packaged together with logic of one or more controllers of the system control module 710 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 705 may be integrated on the same die with logic of one or more controllers of the system control module 710. For one embodiment, at least one of the processor(s) 705 may be integrated on the same die with logic of one or more controllers of the system control module 710 to form a system on chip (SoC).
In various embodiments, system 700 may be, but is not limited to being: a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 700 may have more or fewer components and/or different architectures. For example, in some embodiments, system 700 includes one or more cameras, keyboards, Liquid Crystal Display (LCD) screens (including touch screen displays), non-volatile memory ports, multiple antennas, graphics chips, Application Specific Integrated Circuits (ASICs), and speakers.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, using Application Specific Integrated Circuits (ASIC), a general purpose computer or any other similar hardware device. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions as described above. Likewise, the software programs of the present application (including associated data structures) may be stored on a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. In addition, some steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
Furthermore, portions of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application by way of operation of the computer. Those skilled in the art will appreciate that the form of computer program instructions present in a computer readable medium includes, but is not limited to, source files, executable files, installation package files, etc., and accordingly, the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. Herein, a computer-readable medium may be any available computer-readable storage medium or communication medium that can be accessed by a computer.
Communication media includes media whereby a communication signal containing, for example, computer readable instructions, data structures, program modules, or other data, is transferred from one system to another. Communication media may include conductive transmission media such as electrical cables and wires (e.g., optical fibers, coaxial, etc.) and wireless (non-conductive transmission) media capable of transmitting energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules, or other data may be embodied as a modulated data signal, for example, in a wireless medium, such as a carrier wave or similar mechanism, such as that embodied as part of spread spectrum technology. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be analog, digital or hybrid modulation techniques.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media include, but are not limited to, volatile memory, such as random access memory (RAM, DRAM, SRAM); and nonvolatile memory such as flash memory, various read only memory (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memory (MRAM, feRAM); and magnetic and optical storage devices (hard disk, tape, CD, DVD); or other now known media or later developed computer-readable information/data that can be stored for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to operate a method and/or a solution according to the embodiments of the present application as described above.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the apparatus claims can also be implemented by means of one unit or means in software or hardware. The terms first, second, etc. are used to denote a name, but not any particular order.

Claims (24)

1. A method for presenting a plurality of microscopic sub-video information, wherein the method comprises:
acquiring a plurality of first microscopic sub-video information, wherein each first microscopic sub-video information comprises a plurality of microscopic image information of a corresponding subarea relative to a first microscopic parameter sequence, the first microscopic parameter sequence comprises a plurality of assignments corresponding to first microscopic parameters, the first microscopic sub-video information comprises microscopic video information which is generated by sorting a plurality of microscopic images of the corresponding subarea according to the assignment order in the first microscopic parameter sequence and specific video parameters, and a playing axis of the microscopic video information is the first microscopic parameter;
synchronously presenting the plurality of first microscopic sub-video information according to the first microscopic parameter sequence;
generating corresponding regulation and control instructions based on regulation and control operations of the user on the plurality of first microscopic sub-video information, wherein the regulation and control instructions comprise switching the current first microscopic parameter to a third microscopic parameter, and the third microscopic parameter comprises one of the other first microscopic parameters except the current first microscopic parameter;
acquiring a plurality of third microscopic sub-video information according to the regulation and control instruction, wherein each third microscopic sub-video information comprises a plurality of microscopic image information of a corresponding sub-region about a third microscopic parameter sequence, each third microscopic sub-video information corresponds to one first microscopic sub-video information in the plurality of first microscopic sub-videos, and the third microscopic parameter sequence comprises a plurality of assignments corresponding to the third microscopic parameters;
Wherein the synchronously presenting the plurality of first microscopic sub-video information according to the first microscopic parameter sequence includes:
synchronously presenting the plurality of third microscopic sub-video information according to the third microscopic parameter sequence;
wherein the first microscopic parameters include:
focal plane height information;
rotation angle information;
pitch angle information;
yaw angle information;
brightness information of the illuminating lamp;
illumination light color information;
temperature information;
humidity information;
pH value information;
fluorescence band information;
polarized light angle information;
DIC rotation angle information.
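The synchronous presentation of claim 1 treats a microscope parameter, rather than time, as the playing axis: every sub-video holds one image per assignment of that parameter, and all sub-videos step through the same assignment sequence together. As a non-limiting illustration only (the class, field, and function names below are hypothetical and are not part of the claimed implementation), a minimal Python sketch of that data shape and lockstep playback, including a switch to a third microscopic parameter, might look as follows:

    from dataclasses import dataclass
    from typing import Dict, List

    Frame = bytes  # stand-in for one microscopic image of a sub-region

    @dataclass
    class MicroscopicSubVideo:
        sub_region_id: str
        parameter: str               # e.g. "focal_plane_height" (hypothetical name)
        assignments: List[float]     # the first microscopic parameter sequence = play axis
        frames: Dict[float, Frame]   # one microscopic image per assignment

        def frame_at(self, assignment: float) -> Frame:
            return self.frames[assignment]

    class SynchronousPresenter:
        """Steps every sub-video through the same assignment sequence in lockstep."""

        def __init__(self, sub_videos: List[MicroscopicSubVideo]):
            self.sub_videos = sub_videos

        def play(self, render) -> None:
            # All sub-videos share one parameter sequence, so one loop keeps the
            # presented frames synchronized per assignment rather than per time.
            for assignment in self.sub_videos[0].assignments:
                render({v.sub_region_id: v.frame_at(assignment)
                        for v in self.sub_videos}, assignment)

    def switch_parameter(fetch, sub_region_ids, new_parameter):
        # Handles the "switch to a third microscopic parameter" control instruction.
        # "fetch" is a hypothetical lookup (a local cache or a network call) that
        # returns the sub-video of a sub-region for the requested parameter.
        return SynchronousPresenter([fetch(rid, new_parameter) for rid in sub_region_ids])
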
2. The method of claim 1, wherein the acquiring a plurality of first microscopic sub-video information, wherein each first microscopic sub-video information includes a plurality of microscopic image information of a corresponding sub-region with respect to a first microscopic parameter sequence, the first microscopic parameter sequence including a corresponding plurality of assignments of first microscopic parameters, comprises:
acquiring a plurality of microscopic image information of each subarea about the first microscopic parameter sequence;
and determining the first microscopic sub-video information of each sub-region according to the first microscopic parameter sequence and the plurality of microscopic image information of each sub-region.
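Claim 2 turns a stack of per-sub-region images into a first microscopic sub-video by ordering the images along the assignment sequence rather than by capture time. Purely as an illustration, assuming OpenCV as one possible encoder (the file names, frame rate, and parameter values below are made up), such an assembly step could be sketched as:

    import cv2

    def build_sub_video(image_paths_by_assignment, out_path, fps=10):
        # image_paths_by_assignment: {assignment value -> image file path};
        # all images are assumed to share the same width and height.
        assignments = sorted(image_paths_by_assignment)              # the play axis
        first = cv2.imread(image_paths_by_assignment[assignments[0]])
        h, w = first.shape[:2]
        writer = cv2.VideoWriter(out_path,
                                 cv2.VideoWriter_fourcc(*"mp4v"),
                                 fps, (w, h))
        for a in assignments:                                        # ordered by assignment,
            writer.write(cv2.imread(image_paths_by_assignment[a]))   # not by capture time
        writer.release()
        return assignments  # kept so playback can map frame index -> assignment

    # e.g. build_sub_video({-10.0: "z_minus10.png", 0.0: "z_0.png", 10.0: "z_plus10.png"},
    #                      "subregion_A.mp4")
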
3. The method according to claim 1 or 2, wherein when at least two of the plurality of first microscopic sub-video information correspond to the same sub-region, assignments of a second microscopic parameter of the at least two first microscopic sub-video information are different, wherein the second microscopic parameter includes a first microscopic parameter other than the first microscopic parameter.
4. The method of claim 1, wherein the first microscopic parameters further comprise shooting time information.
5. The method of claim 1, wherein the first microscopic sub-video information comprises at least one of:
two-dimensional microscopic sub-video information;
three-dimensional microscopic sub-video information.
6. The method of claim 1, wherein the control instruction further comprises at least any one of:
converting video dimension information of the plurality of first microscopic sub-video information, wherein the video dimension information comprises two-dimensional video information and three-dimensional video information;
and adding marking information to at least one of the plurality of first microscopic sub-video information.
7. The method of claim 6, wherein the control instruction further comprises adding marking information to at least one of the plurality of first microscopic sub-video information; wherein the synchronously presenting the plurality of first microscopic sub-video information according to the first microscopic parameter sequence further comprises:
synchronously presenting the plurality of first microscopic sub-video information according to the first microscopic parameter sequence, and superimposing the marking information on the at least one first microscopic sub-video information;
wherein the method further comprises:
and tracking and superimposing the marking information in subsequent video frames of the at least one first microscopic sub-video information.
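Claim 7 keeps a user-added mark attached to its content in later frames. One plausible, non-claimed way to do this is to re-locate the marked patch in each subsequent frame and redraw the mark there; the sketch below uses OpenCV template matching as a stand-in tracker and assumes 8-bit BGR frames (all names are illustrative):

    import cv2

    def track_and_overlay(frames, start_idx, mark_box):
        # frames: list of 8-bit BGR images; mark_box: (x, y, w, h) drawn by the user
        # on frames[start_idx]. The marked patch is re-located in every later frame
        # and the mark is redrawn at the new position.
        x, y, w, h = mark_box
        template = frames[start_idx][y:y + h, x:x + w]
        marked_frames = []
        for frame in frames[start_idx:]:
            scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
            _, _, _, (tx, ty) = cv2.minMaxLoc(scores)   # best-match top-left corner
            overlaid = frame.copy()
            cv2.rectangle(overlaid, (tx, ty), (tx + w, ty + h), (0, 255, 0), 2)
            marked_frames.append(overlaid)
        return marked_frames
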
8. The method of claim 1, wherein the method further comprises:
and comparing and matching the plurality of microscopic image information presented at the same time in the plurality of first microscopic sub-video information, and if matching degree information of one or more microscopic image information satisfies matching degree threshold information, generating corresponding matching result information and performing a corresponding matching operation, wherein the matching result information comprises presentation time information of the one or more microscopic image information.
9. The method of claim 8, wherein the matching operation comprises at least any one of:
presenting the matching result information;
suspending the presentation of the plurality of first microscopic sub-video information when the presentation time of the plurality of first microscopic sub-video information reaches the presentation time information of the one or more microscopic image information;
marking the one or more microscopic image information when the presentation time of the plurality of first microscopic sub-video information reaches the presentation time information of the one or more microscopic image information;
and determining, from the one or more first microscopic sub-video information, a matching region matched by the comparison, and adding a label to the matching region.
10. The method of claim 8 or 9, wherein the comparing and matching comprises at least one of:
comparing and matching microscopic image information in the plurality of first microscopic sub-video information with template image information of at least one target object to obtain matching degree information corresponding to the microscopic image information in each first microscopic sub-video information;
comparing and matching the microscopic image information in the plurality of first microscopic sub-video information with the template image information of the at least one target object to obtain initial matching values, and deriving the matching degree information of the microscopic image information in each first microscopic sub-video information from the initial matching values of each first microscopic sub-video information;
comparing and matching microscopic image information of one first microscopic sub-video information among the plurality of first microscopic sub-video information with microscopic image information of the other microscopic sub-video information, and obtaining the matching degree information of the microscopic image information of the first microscopic sub-video information by combining the matching results, wherein the assignment of the microscopic image information of the first microscopic sub-video information corresponds to the assignment of the microscopic image information of the other microscopic sub-video information;
and performing similarity matching between microscopic image information of one first microscopic sub-video information among the plurality of first microscopic sub-video information and microscopic image information of the other microscopic sub-video information, and taking the highest matching degree as the matching degree information corresponding to the microscopic image information of the first microscopic sub-video information, wherein the assignment of the microscopic image information of the first microscopic sub-video information corresponds to the assignment of the microscopic image information of the other microscopic sub-video information.
11. The method of claim 10, wherein the matching degree information satisfying the matching degree threshold information includes at least any one of:
the matching degree information is greater than or equal to first matching degree threshold information;
the matching degree information is less than or equal to second matching degree threshold information.
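Claims 8 to 11 score the images shown at the same play-axis position against a template (or against each other) and act when the score crosses a threshold in either direction. The following sketch is only one way to realize that idea; the normalized-correlation score, the threshold semantics, and the shape of the returned matching result are assumptions, not the patented algorithm. It reuses sub-video objects shaped like the sketch after claim 1, but with NumPy arrays of the same size as the template as frames:

    import numpy as np

    def score(frame: np.ndarray, template: np.ndarray) -> float:
        # Normalized correlation between a frame and a same-size template.
        a = (frame - frame.mean()).ravel()
        b = (template - template.mean()).ravel()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def find_matches(sub_videos, template, upper=None, lower=None):
        # Returns "matching result information": which sub-region, at which
        # play-axis assignment, produced a score >= upper or <= lower.
        hits = []
        tmpl = np.asarray(template, dtype=np.float64)
        for v in sub_videos:
            for assignment in v.assignments:
                s = score(np.asarray(v.frames[assignment], dtype=np.float64), tmpl)
                if (upper is not None and s >= upper) or \
                   (lower is not None and s <= lower):
                    hits.append((v.sub_region_id, assignment, s))
        return hits  # a presenter could pause or mark frames at these assignments
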
12. The method of claim 1, wherein the method is applied to a user equipment, and the acquiring a plurality of first microscopic sub-video information, wherein each first microscopic sub-video information includes a plurality of microscopic image information of a corresponding sub-region about a first microscopic parameter sequence, the first microscopic parameter sequence including a plurality of assignments of the first microscopic parameter, comprises:
transmitting a microscopic sub-video request to a network device, wherein the microscopic sub-video request comprises identification information of a plurality of sub-regions and the first microscopic parameter, and the plurality of sub-regions are contained in at least one target object;
and receiving a plurality of first microscopic sub-video information returned by the network device, wherein each first microscopic sub-video information comprises a plurality of microscopic image information of a corresponding sub-region about a first microscopic parameter sequence, and the first microscopic parameter sequence comprises a plurality of assignments of the first microscopic parameter.
13. The method of claim 12, wherein the receiving the plurality of first microscopic sub-video information returned by the network device, wherein each first microscopic sub-video information includes a plurality of microscopic image information of a corresponding sub-region about a first microscopic parameter sequence, the first microscopic parameter sequence comprising a plurality of assignments of the first microscopic parameter, comprises:
receiving access link information of a plurality of first microscopic sub-video information returned by the network device, wherein each first microscopic sub-video information comprises a plurality of microscopic image information of a corresponding sub-region about a first microscopic parameter sequence, and the first microscopic parameter sequence comprises a plurality of assignments of the first microscopic parameter;
wherein the synchronously presenting the plurality of first microscopic sub-video information according to the first microscopic parameter sequence includes:
and accessing a corresponding webpage according to the access link information, and synchronously presenting the plurality of first microscopic sub-video information on the webpage according to the first microscopic parameter sequence.
14. The method of claim 12, wherein the microscopic sub-video request further comprises a second microscopic parameter, wherein the second microscopic parameter comprises a first microscopic parameter other than the first microscopic parameter; and the receiving a plurality of first microscopic sub-video information returned by the network device, wherein each first microscopic sub-video information includes a plurality of microscopic image information of a corresponding sub-region about a first microscopic parameter sequence, the first microscopic parameter sequence comprising a plurality of assignments of the first microscopic parameter, comprises:
and receiving a plurality of first microscopic sub-video information returned by the network device, wherein each first microscopic sub-video information comprises a plurality of microscopic image information of a corresponding sub-region about a first microscopic parameter sequence, the first microscopic parameter sequence comprises a plurality of assignments of the first microscopic parameter, at least two microscopic sub-video information corresponding to the same sub-region exist in the plurality of first microscopic sub-video information, and assignments of the second microscopic parameter of the at least two microscopic sub-video information are different.
15. The method of any of claims 12 to 14, wherein the identification information of the sub-region comprises at least any of:
a plurality of microscopic image information of the plurality of sub-regions, wherein each sub-region corresponds to at least one microscopic image information;
key fields of the sub-regions;
image information of the sub-regions;
microscopic record information of the sub-regions;
unique identification code information of the sub-regions;
indication information of the sub-regions, wherein the indication information is used for indicating the range of each sub-region in an image of the target object to which the sub-region belongs.
16. The method of claim 15, wherein the identification information includes indication information of the plurality of sub-regions, wherein the indication information is used to indicate the range in which each sub-region is located in an image of the target object to which the sub-region belongs; wherein the method further comprises:
determining the range of each sub-region in a current image of the at least one target object based on a range selection operation of the user, and generating the indication information.
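Claims 12 to 16 describe a client-side exchange: the user selects ranges on the target-object image, those ranges become the indication information identifying sub-regions, and a microscopic sub-video request carrying them together with the first microscopic parameter is sent to the network device, which answers with the sub-videos or with access links. The endpoint URL, JSON field names, and response shape below are hypothetical assumptions; the sketch only shows the general shape of such a request:

    import requests

    def request_sub_videos(server_url, target_object_id, selected_ranges, parameter):
        payload = {
            "target_object": target_object_id,
            # indication information: where each selected sub-region sits in the image
            "sub_regions": [{"x": x, "y": y, "w": w, "h": h}
                            for (x, y, w, h) in selected_ranges],
            "first_microscopic_parameter": parameter,  # e.g. "focal_plane_height"
        }
        resp = requests.post(f"{server_url}/microscopic-sub-videos",
                             json=payload, timeout=30)
        resp.raise_for_status()
        # The device may answer with the sub-videos themselves or with access links
        # that a web page then plays back synchronously (claim 13).
        return resp.json()
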
17. A method for presenting a plurality of microscopic sub-video information, applied to a network device, wherein the method comprises:
receiving a microscopic sub-video request sent by a user equipment, wherein the microscopic sub-video request comprises identification information of a sub-region and a first microscopic parameter;
determining a corresponding plurality of first microscopic sub-video information according to the identification information of the sub-region and the first microscopic parameter, wherein each first microscopic sub-video information comprises a plurality of microscopic image information of the corresponding sub-region about a first microscopic parameter sequence, the first microscopic parameter sequence comprises a plurality of assignments of the first microscopic parameter, the first microscopic sub-video information comprises microscopic video information generated by sorting the plurality of microscopic images of the corresponding sub-region according to the assignment order in the first microscopic parameter sequence and a specific video parameter, and the playing axis of the microscopic video information is the first microscopic parameter;
returning the plurality of first microscopic sub-video information to the user equipment;
wherein the plurality of first microscopic sub-video information is presented at the user equipment side via the following steps:
generating a corresponding control instruction based on a control operation of the user on the plurality of first microscopic sub-video information, wherein the control instruction comprises switching the current first microscopic parameter to a third microscopic parameter, and the third microscopic parameter comprises one of the first microscopic parameters other than the current first microscopic parameter;
acquiring a plurality of third microscopic sub-video information according to the control instruction, wherein each third microscopic sub-video information comprises a plurality of microscopic image information of a corresponding sub-region about a third microscopic parameter sequence, each third microscopic sub-video information corresponds to one of the plurality of first microscopic sub-video information, and the third microscopic parameter sequence comprises a plurality of assignments of the third microscopic parameter;
synchronously presenting the plurality of third microscopic sub-video information according to the third microscopic parameter sequence;
wherein the first microscopic parameters include:
focal plane height information;
rotation angle information;
pitch angle information;
yaw angle information;
brightness information of the illuminating lamp;
illumination light color information;
temperature information;
humidity information;
pH value information;
fluorescence band information;
polarized light angle information;
DIC rotation angle information.
18. The method of claim 17, wherein the determining the corresponding plurality of first microscopic sub-video information according to the identification information of the sub-region and the first microscopic parameter, wherein each first microscopic sub-video information comprises a plurality of microscopic image information of the corresponding sub-region about a first microscopic parameter sequence comprising a plurality of assignments of the first microscopic parameter, comprises:
determining a corresponding plurality of second microscopic sub-video information according to the identification information of the sub-region and the first microscopic parameter, wherein each second microscopic sub-video information has a second microscopic parameter sequence of the corresponding sub-region, each second microscopic sub-video information comprises a plurality of microscopic image information about the corresponding second microscopic parameter sequence, and each second microscopic parameter sequence comprises a plurality of assignments corresponding to the first microscopic parameter;
if at least one common microscopic parameter subsequence exists in the second microscopic parameter sequences corresponding to the plurality of second microscopic sub-video information, determining a corresponding first microscopic parameter sequence according to the at least one common microscopic parameter subsequence, and determining the first microscopic sub-video information corresponding to each second microscopic sub-video information according to the first microscopic parameter sequence, wherein the first microscopic sub-video information belongs to a sub-video of the second microscopic sub-video information, and the first microscopic parameter sequence belongs to a subsequence of the second microscopic parameter sequence.
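Claim 18 derives the first microscopic parameter sequence from the assignments shared by all stored second microscopic parameter sequences, so that the returned first sub-videos can be played strictly in step. A toy illustration of extracting that common subsequence and trimming each second sub-video to it (operating on objects shaped like the sketch after claim 1; not the claimed procedure):

    def common_subsequence(sequences):
        # Assignments present in every second microscopic parameter sequence,
        # kept in the order of the first sequence.
        shared = set(sequences[0])
        for seq in sequences[1:]:
            shared &= set(seq)
        return [a for a in sequences[0] if a in shared]

    def trim_to_common(second_sub_videos):
        # second_sub_videos: objects with .sub_region_id, .assignments, .frames
        common = common_subsequence([v.assignments for v in second_sub_videos])
        return [{"sub_region_id": v.sub_region_id,
                 "assignments": common,   # serves as the first microscopic parameter sequence
                 "frames": {a: v.frames[a] for a in common}}
                for v in second_sub_videos]
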
19. The method of claim 17, wherein the microscopic sub-video request further comprises second microscopic parameter information, wherein the second microscopic parameter information comprises first microscopic parameters other than the first microscopic parameter; and the determining, according to the identification information of the sub-region and the first microscopic parameter, a corresponding plurality of first microscopic sub-video information, wherein each first microscopic sub-video information includes a plurality of microscopic image information of the corresponding sub-region about a first microscopic parameter sequence, the first microscopic parameter sequence including a plurality of assignments of the first microscopic parameter, includes:
and determining a corresponding plurality of first microscopic sub-video information according to the identification information of the sub-region and the first microscopic parameter, wherein each first microscopic sub-video information comprises a plurality of microscopic image information of the corresponding sub-region about a first microscopic parameter sequence, the first microscopic parameter sequence comprises a plurality of assignments of the first microscopic parameter, at least two microscopic sub-video information corresponding to the same sub-region exist in the plurality of first microscopic sub-video information, and assignments of the second microscopic parameter information of the at least two microscopic sub-video information are different.
20. A method for presenting a plurality of microscopic sub-video information, wherein the method comprises:
a user equipment sends a microscopic sub-video request about a plurality of sub-regions to a network device, wherein the microscopic sub-video request comprises identification information of the sub-regions and a first microscopic parameter;
the network device receives the microscopic sub-video request, and determines a corresponding plurality of first microscopic sub-video information according to the identification information of the sub-regions and the first microscopic parameter, wherein each first microscopic sub-video information comprises a plurality of microscopic image information of a corresponding sub-region about a first microscopic parameter sequence, the first microscopic parameter sequence comprises a plurality of assignments of the first microscopic parameter, the first microscopic sub-video information comprises microscopic video information generated by sorting the plurality of microscopic images of the corresponding sub-region according to the assignment order in the first microscopic parameter sequence and a specific video parameter, and the playing axis of the microscopic video information is the first microscopic parameter;
the network device returns the plurality of first microscopic sub-video information to the user equipment;
the user equipment receives the plurality of first microscopic sub-video information returned by the network device, and synchronously presents the plurality of first microscopic sub-video information according to the first microscopic parameter sequence; generates a corresponding control instruction based on a control operation of the user on the plurality of first microscopic sub-video information, wherein the control instruction comprises switching the current first microscopic parameter to a third microscopic parameter, and the third microscopic parameter comprises one of the first microscopic parameters other than the current first microscopic parameter; and acquires a plurality of third microscopic sub-video information according to the control instruction, wherein each third microscopic sub-video information comprises a plurality of microscopic image information of a corresponding sub-region about a third microscopic parameter sequence, each third microscopic sub-video information corresponds to one of the plurality of first microscopic sub-video information, and the third microscopic parameter sequence comprises a plurality of assignments of the third microscopic parameter; wherein the synchronously presenting the plurality of first microscopic sub-video information according to the first microscopic parameter sequence includes: synchronously presenting the plurality of third microscopic sub-video information according to the third microscopic parameter sequence;
Wherein the first microscopic parameters include:
focal plane height information;
rotation angle information;
pitch angle information;
yaw angle information;
brightness information of the illuminating lamp;
illumination light color information;
temperature information;
humidity information;
pH value information;
fluorescence band information;
polarized light angle information;
DIC rotation angle information.
21. An apparatus for presenting a plurality of microscopic sub-video information, wherein the apparatus comprises:
a one-one module, configured to acquire a plurality of first microscopic sub-video information, wherein each first microscopic sub-video information comprises a plurality of microscopic image information of a corresponding sub-region about a first microscopic parameter sequence, the first microscopic parameter sequence comprises a plurality of assignments of a first microscopic parameter, the first microscopic sub-video information comprises microscopic video information generated by sorting the plurality of microscopic images of the corresponding sub-region according to the assignment order in the first microscopic parameter sequence and a specific video parameter, and the playing axis of the microscopic video information is the first microscopic parameter;
a one-two module, configured to synchronously present the plurality of first microscopic sub-video information according to the first microscopic parameter sequence;
a one-three module, configured to generate a corresponding control instruction based on a control operation of the user on the plurality of first microscopic sub-video information, wherein the control instruction comprises switching the current first microscopic parameter to a third microscopic parameter, and the third microscopic parameter comprises one of the first microscopic parameters other than the current first microscopic parameter;
a one-four module, configured to acquire a plurality of third microscopic sub-video information according to the control instruction, wherein each third microscopic sub-video information comprises a plurality of microscopic image information of a corresponding sub-region about a third microscopic parameter sequence, each third microscopic sub-video information corresponds to one of the plurality of first microscopic sub-video information, and the third microscopic parameter sequence comprises a plurality of assignments of the third microscopic parameter;
wherein the one-two module is further configured to synchronously present the plurality of third microscopic sub-video information according to the third microscopic parameter sequence;
wherein the first microscopic parameters include:
focal plane height information;
rotation angle information;
pitch angle information;
yaw angle information;
brightness information of the illuminating lamp;
illumination light color information;
temperature information;
humidity information;
pH value information;
fluorescence band information;
polarized light angle information;
DIC rotation angle information.
22. A network device for presenting a plurality of microscopic sub-video information, wherein the device comprises:
a two-one module, configured to receive a microscopic sub-video request sent by a user equipment, wherein the microscopic sub-video request comprises identification information of a sub-region and a first microscopic parameter;
a two-two module, configured to determine a corresponding plurality of first microscopic sub-video information according to the identification information of the sub-region and the first microscopic parameter, wherein each first microscopic sub-video information comprises a plurality of microscopic image information of the corresponding sub-region about a first microscopic parameter sequence, the first microscopic parameter sequence comprises a plurality of assignments of the first microscopic parameter, the first microscopic sub-video information comprises microscopic video information generated by sorting the plurality of microscopic images of the corresponding sub-region according to the assignment order in the first microscopic parameter sequence and a specific video parameter, and the playing axis of the microscopic video information is the first microscopic parameter;
a two-three module, configured to return the plurality of first microscopic sub-video information to the user equipment;
wherein the plurality of first microscopic sub-video information is presented at the user equipment side via the following steps:
generating a corresponding control instruction based on a control operation of the user on the plurality of first microscopic sub-video information, wherein the control instruction comprises switching the current first microscopic parameter to a third microscopic parameter, and the third microscopic parameter comprises one of the first microscopic parameters other than the current first microscopic parameter;
acquiring a plurality of third microscopic sub-video information according to the control instruction, wherein each third microscopic sub-video information comprises a plurality of microscopic image information of a corresponding sub-region about a third microscopic parameter sequence, each third microscopic sub-video information corresponds to one of the plurality of first microscopic sub-video information, and the third microscopic parameter sequence comprises a plurality of assignments of the third microscopic parameter;
synchronously presenting the plurality of third microscopic sub-video information according to the third microscopic parameter sequence;
wherein the first microscopic parameters include:
focal plane height information;
rotation angle information;
pitch angle information;
yaw angle information;
brightness information of the illuminating lamp;
illumination light color information;
temperature information;
humidity information;
pH value information;
fluorescence band information;
polarized light angle information;
DIC rotation angle information.
23. An apparatus for presenting a plurality of microscopic sub-video information, wherein the apparatus comprises:
a processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to perform the operations of the method of any one of claims 1 to 18.
24. A computer readable medium storing instructions that, when executed, cause a system to perform the operations of the method of any one of claims 1 to 18.
CN202010171426.9A 2020-03-12 2020-03-12 Method and device for presenting multiple microscopic sub-video information Active CN113395483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010171426.9A CN113395483B (en) 2020-03-12 2020-03-12 Method and device for presenting multiple microscopic sub-video information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010171426.9A CN113395483B (en) 2020-03-12 2020-03-12 Method and device for presenting multiple microscopic sub-video information

Publications (2)

Publication Number Publication Date
CN113395483A CN113395483A (en) 2021-09-14
CN113395483B true CN113395483B (en) 2023-07-18

Family

ID=77615696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010171426.9A Active CN113395483B (en) 2020-03-12 2020-03-12 Method and device for presenting multiple microscopic sub-video information

Country Status (1)

Country Link
CN (1) CN113395483B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140043462A1 (en) * 2012-02-10 2014-02-13 Inscopix, Inc. Systems and methods for distributed video microscopy

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001013640A1 (en) * 1999-08-13 2001-02-22 Universal Imaging Corporation System and method for acquiring images at maximum acquisition rate while asynchronously sequencing microscope devices
CN103460684A (en) * 2011-03-30 2013-12-18 佳能株式会社 Image processing apparatus, imaging system, and image processing system
JP2012190033A (en) * 2012-05-07 2012-10-04 Olympus Corp Microscopic photographing device and microscopic photographing device control method
CN103226823A (en) * 2013-03-18 2013-07-31 华中科技大学 Fast image registering method based on LSPT (Logarithmic Subtraction Point Template)
WO2018096639A1 (en) * 2016-11-24 2018-05-31 株式会社ニコン Image processing device, microscope system, image processing method, and program

Also Published As

Publication number Publication date
CN113395483A (en) 2021-09-14

Similar Documents

Publication Publication Date Title
CN108769517B (en) Method and equipment for remote assistance based on augmented reality
EP4394554A1 (en) Method for determining and presenting target mark information and apparatus
JP2021523347A (en) Reduced output behavior of time-of-flight cameras
CN113965665B (en) Method and equipment for determining virtual live image
CN114390201A (en) Focusing method and device thereof
CN113490063B (en) Method, device, medium and program product for live interaction
CN112822419A (en) Method and equipment for generating video information
CN113395483B (en) Method and device for presenting multiple microscopic sub-video information
CN113470167B (en) Method and device for presenting three-dimensional microscopic image
CN109636922B (en) Method and device for presenting augmented reality content
CN113657245B (en) Method, device, medium and program product for human face living body detection
CN113392267B (en) Method and device for generating two-dimensional microscopic video information of target object
CN113392675B (en) Method and equipment for presenting microscopic video information
Tsonkov et al. Objects Detection in an Image by Color Features
CN113393407B (en) Method and device for acquiring microscopic image information of sample
CN113470185B (en) Method and equipment for presenting three-dimensional microscopic image
WO2023048837A1 (en) Contextual usage control of cameras
CN113395509B (en) Method and apparatus for providing and presenting three-dimensional microscopic video information of a target object
CN113469865B (en) Method and equipment for acquiring microscopic image
CN113469864B (en) Method and equipment for acquiring microscopic image
CN113392674A (en) Method and equipment for regulating and controlling microscopic video information
CN113395484A (en) Method and equipment for presenting microscopic sub-video information of target object
CN110781416A (en) Method and device for providing landscape information
Mao et al. A deep learning approach to track Arabidopsis seedlings’ circumnutation from time-lapse videos
US20180184148A1 (en) Electronic device and operation method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant