CN111489295B - Image processing method, electronic device, and storage medium - Google Patents


Info

Publication number
CN111489295B
CN111489295B (application CN202010602204.8A)
Authority
CN
China
Prior art keywords
preset
target object
target
attribute information
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010602204.8A
Other languages
Chinese (zh)
Other versions
CN111489295A
Inventor
刘美兰 (Liu Meilan)
施杨 (Shi Yang)
叶春毅 (Ye Chunyi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ping An Smart Healthcare Technology Co ltd
Original Assignee
Ping An International Smart City Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An International Smart City Technology Co Ltd filed Critical Ping An International Smart City Technology Co Ltd
Priority to CN202010602204.8A
Publication of CN111489295A
Application granted
Publication of CN111489295B
Legal status: Active
Anticipated expiration noted

Classifications

    • G06T 3/40 — Geometric image transformations: scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06F 3/04845 — GUI interaction techniques for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/04883 — GUI interaction techniques using a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text
    • G06T 7/0012 — Image analysis: biomedical image inspection
    • G06T 2207/10061 — Image acquisition modality: microscopic image from scanning electron microscope
    • G06T 2207/20081 — Special algorithmic details: training; learning
    • G06T 2207/30024 — Subject of image: cell structures in vitro; tissue sections in vitro

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to image processing technology and provides an image processing method, an electronic device, and a storage medium. The method comprises: adding a mask layer to an original image; calculating the maximum magnification of a target object; converting the original image into a first preset object; adding an overlay layer; calculating the coordinates of the target object's center point in the first preset object; moving the magnification window of the overlay layer to those coordinates; constructing a second preset object based on the mask layer; converting the attribute information of the sub-target objects corresponding to the target object into data in a preset format and filling the data into a set of the second preset object; and performing the magnification operation on the preset magnification window according to the filled set. The invention avoids the situation in which the user waits through a long loading time because of the large data volume of the original image. The invention further relates to image recognition in artificial intelligence and to blockchain technology, and can be applied to the field of smart healthcare, thereby promoting the construction of smart cities.

Description

Image processing method, electronic device, and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, an electronic device, and a storage medium.
Background
With the development of image display technology, modern medicine applies it to slide viewing: a traditional pathological section is scanned to obtain a digital pathological section, which a doctor can then examine on a computer screen. Compared with a traditional glass slide, a digital pathological section offers a higher display magnification and greater clarity; however, because the data volume of a digital pathological section is large, the doctor often has to wait through a long loading time when enlarging or reducing it.
Disclosure of Invention
In view of the above, the present invention provides an image processing method, an electronic device, and a storage medium, aiming to solve the technical problem in the prior art that a doctor must wait through a long loading time when a digital pathological section is enlarged or reduced.
To achieve the above object, the present invention provides an image processing method comprising:
an identification step: responding to an image processing request sent by a user, acquiring the original image to be processed from the request, inputting the original image into at least one pre-trained image recognition model, and recognizing the coordinate information of each target object in the original image and the attribute information of each target object;
a first processing step: when it is monitored that a user performs touch operation on any target object in the original image, adding a mask layer to the original image, acquiring target attribute information of the target object, calculating the maximum magnification of the target object on the current page in real time based on the target attribute information, and converting the original image into a first preset object based on the maximum magnification;
a calculation step: adding an overlay layer to the first preset object, calculating according to the target attribute information and a first preset calculation rule to obtain coordinate information of a central point of the target object in the first preset object, and moving a preset amplification window corresponding to the overlay layer to a position corresponding to the coordinate information of the central point of the target object in the first preset object; and
a second processing step: and constructing a second preset object based on the mask layer, acquiring attribute information of a plurality of sub-target objects corresponding to the target object, converting the attribute information of each sub-target object into data in a preset format and filling the data into a set of the second preset object, acquiring a target magnification factor triggered by a user based on the first preset object, and performing amplification operation on the preset amplification window according to the set of the second preset object filled with the data and the target magnification factor.
Preferably, the calculating, in real time, the maximum magnification of the target object in the current page based on the target attribute information includes:
acquiring the width values of a preset amplification window and of the current page, calculating the multiple ratio of the current page to the preset amplification window based on a second preset calculation rule, and acquiring the width value and height value of the target object from the target attribute information;
judging whether the width value of the target object is larger than the height value of the target object or not, and when the width value of the target object is judged to be larger than the height value of the target object, calculating by utilizing a third preset calculation rule and the multiple ratio to obtain the maximum magnification of the target object in the current page;
and when the width value of the target object is judged to be smaller than the height value of the target object, calculating by utilizing a fourth preset calculation rule and the multiple ratio to obtain the maximum magnification of the target object in the current page.
Preferably, the obtaining, by calculation according to the target attribute information and a first preset calculation rule, coordinate information of the center point of the target object in the first preset object includes:
acquiring a width value and a height value of the target object from the target attribute information, and an abscissa value and an ordinate value of the target object in the original image, acquiring a width value and a height value of a preset amplification window, and calculating to obtain an abscissa value of the center point of the target object in the first preset object based on the width value of the target object, the abscissa value of the target object in the original image and the width value of the preset amplification window;
and calculating to obtain a longitudinal coordinate value of the central point of the target object in the first preset object based on the height value of the target object, the longitudinal coordinate value of the target object in the original image and the height value of the preset amplification window.
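The "first preset calculation rule" above is stated only abstractly. One plausible reading, centring the preset amplification window on the object's circumscribed rectangle, can be sketched as follows; the formula and all names are illustrative assumptions, not definitions from the patent:

```javascript
// Illustrative sketch only: the patent does not give the "first preset
// calculation rule" in closed form, so this centring formula is an
// assumption. (x, y, w, h) describe the target object's circumscribed
// rectangle in the original image; (winW, winH) is the size of the
// preset amplification window.
function centerPointInViewer(x, y, w, h, winW, winH) {
  return {
    // centre of the circumscribed rectangle, shifted so the amplification
    // window placed at the result covers the object symmetrically
    cx: x + w / 2 - winW / 2,
    cy: y + h / 2 - winH / 2,
  };
}
```

Under this reading, moving the window "to the position corresponding to the coordinate information" is a single assignment of `(cx, cy)` to the window's top-left corner.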
Preferably, the obtaining of the target magnification factor triggered by the user based on the first preset object includes:
adding a scaling scale to the display page corresponding to the first preset object, and configuring a moving button on the scaling scale;
monitoring the end position to which the user drags the moving button, and calculating the distance from that end position to the origin of the scaling scale;
and taking the magnification corresponding to that distance as the target magnification.
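The mapping from drag distance to magnification is not spelled out; assuming the simplest case of a linear scale, the lookup could be sketched as below (function and parameter names are illustrative, not taken from the patent):

```javascript
// Illustrative sketch (assumed linear scale): the patent only says the
// magnification "corresponding to the length" is taken as the target
// magnification, so the linear mapping here is an assumption.
// dragEndPx: pixel position where the user released the moving button
// originPx:  pixel position of the scale's origin
// scaleLengthPx: full track length of the scaling scale, in pixels
function magnificationFromScale(dragEndPx, originPx, scaleLengthPx, minZoom, maxZoom) {
  const length = Math.abs(dragEndPx - originPx);            // distance dragged along the scale
  const t = Math.min(Math.max(length / scaleLengthPx, 0), 1); // clamp to [0, 1]
  return minZoom + t * (maxZoom - minZoom);                 // linear interpolation
}
```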
Preferably, before the second processing step, the method further comprises:
and recognizing the coordinate information of the sub-target objects corresponding to the target object and the attribute information of each sub-target object by using at least one pre-trained sub-target recognition model.
To achieve the above object, the present invention also provides an electronic device, comprising a memory and a processor, wherein an image processing program is stored in the memory, and when the image processing program is executed by the processor the following steps are implemented:
an identification step: responding to an image processing request sent by a user, acquiring the original image to be processed from the request, inputting the original image into at least one pre-trained image recognition model, and recognizing the coordinate information of each target object in the original image and the attribute information of each target object;
a first processing step: when it is monitored that a user performs touch operation on any target object in the original image, adding a mask layer to the original image, acquiring target attribute information of the target object, calculating the maximum magnification of the target object on the current page in real time based on the target attribute information, and converting the original image into a first preset object based on the maximum magnification;
a calculation step: adding an overlay layer to the first preset object, calculating according to the target attribute information and a first preset calculation rule to obtain coordinate information of a central point of the target object in the first preset object, and moving a preset amplification window corresponding to the overlay layer to a position corresponding to the coordinate information of the central point of the target object in the first preset object; and
a second processing step: and constructing a second preset object based on the mask layer, acquiring attribute information of a plurality of sub-target objects corresponding to the target object, converting the attribute information of each sub-target object into data in a preset format and filling the data into a set of the second preset object, acquiring a target magnification factor triggered by a user based on the first preset object, and performing amplification operation on the preset amplification window according to the set of the second preset object filled with the data and the target magnification factor.
Preferably, the calculating, in real time, the maximum magnification of the target object in the current page based on the target attribute information includes:
acquiring the width values of a preset amplification window and of the current page, calculating the multiple ratio of the current page to the preset amplification window based on a second preset calculation rule, and acquiring the width value and height value of the target object from the target attribute information;
judging whether the width value of the target object is larger than the height value of the target object or not, and when the width value of the target object is judged to be larger than the height value of the target object, calculating by utilizing a third preset calculation rule and the multiple ratio to obtain the maximum magnification of the target object in the current page;
and when the width value of the target object is judged to be smaller than the height value of the target object, calculating by utilizing a fourth preset calculation rule and the multiple ratio to obtain the maximum magnification of the target object in the current page.
Preferably, the obtaining, by calculation according to the target attribute information and a first preset calculation rule, coordinate information of the center point of the target object in the first preset object includes:
acquiring a width value and a height value of the target object from the target attribute information, and an abscissa value and an ordinate value of the target object in the original image, acquiring a width value and a height value of a preset amplification window, and calculating to obtain an abscissa value of the center point of the target object in the first preset object based on the width value of the target object, the abscissa value of the target object in the original image and the width value of the preset amplification window;
and calculating to obtain a longitudinal coordinate value of the central point of the target object in the first preset object based on the height value of the target object, the longitudinal coordinate value of the target object in the original image and the height value of the preset amplification window.
Preferably, the obtaining of the target magnification factor triggered by the user based on the first preset object includes:
adding a scaling scale to the display page corresponding to the first preset object, and configuring a moving button on the scaling scale;
monitoring the end position to which the user drags the moving button, and calculating the distance from that end position to the origin of the scaling scale;
and taking the magnification corresponding to that distance as the target magnification.
To achieve the above object, the present invention further provides a computer-readable storage medium comprising a storage data area and a storage program area, wherein the storage data area stores data created according to the use of blockchain nodes, and the storage program area stores an image processing program which, when executed by a processor, implements any of the steps of the image processing method described above.
With the image processing method, electronic device, and storage medium provided by the invention, a doctor need only click a target object to see the various cells within it and their density, can observe the target object more accurately, and avoids waiting through the long loading otherwise caused by the large data volume of the original image.
Drawings
FIG. 1 is a diagram of an electronic device according to a preferred embodiment of the present invention;
FIG. 2 is a block diagram of a preferred embodiment of the image processing program of FIG. 1;
FIG. 3 is a flowchart of a preferred embodiment of an image processing method according to the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a schematic diagram of an electronic device 1 according to a preferred embodiment of the invention is shown.
The electronic device 1 includes, but is not limited to: a memory 11, a processor 12, a display 13, and a network interface 14. The electronic device 1 connects to a network through the network interface 14 to obtain raw data. The network may be a wireless or wired communication network such as an intranet, the Internet, a Global System for Mobile communications (GSM) network, a Wideband Code Division Multiple Access (WCDMA) network, a 4G or 5G network, Bluetooth, or Wi-Fi.
The memory 11 includes at least one type of readable storage medium, such as flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, or an optical disk. In some embodiments, the memory 11 may be an internal storage unit of the electronic device 1, such as a hard disk or memory of the electronic device 1. In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the electronic device 1. Of course, the memory 11 may also comprise both an internal storage unit and an external storage device of the electronic device 1. In this embodiment, the memory 11 is generally used for storing the operating system installed on the electronic device 1 and various types of application software, such as the program code of the image processing program 10. Further, the memory 11 may also be used to temporarily store various types of data that have been output or are to be output.
Processor 12 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 12 is generally used for controlling the overall operation of the electronic device 1, such as performing data interaction or communication related control and processing. In this embodiment, the processor 12 is configured to run the program code stored in the memory 11 or process data, for example, run the program code of the image processing program 10.
The display 13 may be referred to as a display screen or display unit. In some embodiments, the display 13 may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an Organic Light-Emitting Diode (OLED) touch screen, or the like. The display 13 is used for displaying information processed in the electronic device 1 and for displaying a visual work interface, for example, results of data statistics.
The network interface 14 may optionally comprise a standard wired interface, a wireless interface (e.g. WI-FI interface), the network interface 14 typically being used for establishing a communication connection between the electronic apparatus 1 and other electronic devices.
Fig. 1 only shows the electronic device 1 with components 11-14 and the image processing program 10, but it is to be understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead.
Optionally, the electronic device 1 may further comprise a user interface, the user interface may comprise a Display (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface may further comprise a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an Organic Light-Emitting Diode (OLED) touch screen, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic apparatus 1 and for displaying a visualized user interface.
The electronic device 1 may further include a Radio Frequency (RF) circuit, a sensor, an audio circuit, and the like, which are not described in detail herein.
In the above embodiment, the processor 12, when executing the image processing program 10 stored in the memory 11, may implement the following steps:
an identification step: responding to an image processing request sent by a user, acquiring the original image to be processed from the request, inputting the original image into at least one pre-trained image recognition model, and recognizing the coordinate information of each target object in the original image and the attribute information of each target object;
a first processing step: when it is monitored that a user performs touch operation on any target object in the original image, adding a mask layer to the original image, acquiring target attribute information of the target object, calculating the maximum magnification of the target object on the current page in real time based on the target attribute information, and converting the original image into a first preset object based on the maximum magnification;
a calculation step: adding an overlay layer to the first preset object, calculating according to the target attribute information and a first preset calculation rule to obtain coordinate information of a central point of the target object in the first preset object, and moving a preset amplification window corresponding to the overlay layer to a position corresponding to the coordinate information of the central point of the target object in the first preset object; and
a second processing step: and constructing a second preset object based on the mask layer, acquiring attribute information of a plurality of sub-target objects corresponding to the target object, converting the attribute information of each sub-target object into data in a preset format and filling the data into a set of the second preset object, acquiring a target magnification factor triggered by a user based on the first preset object, and performing amplification operation on the preset amplification window according to the set of the second preset object filled with the data and the target magnification factor.
The storage device may be the memory 11 of the electronic apparatus 1, or may be another storage device communicatively connected to the electronic apparatus 1.
For a detailed description of the above steps, please refer to the following description of fig. 2 regarding a program module diagram of an embodiment of the image processing program 10 and fig. 3 regarding a flowchart of an embodiment of the image processing method.
In other embodiments, the image processing program 10 may be divided into a plurality of modules, which are stored in the memory 11 and executed by the processor 12 to accomplish the present invention. A module, as referred to herein, is a series of computer program instruction segments capable of performing a specified function.
Referring to fig. 2, a block diagram of an embodiment of the image processing program 10 of fig. 1 is shown. In the present embodiment, the image processing program 10 may be divided into: the system comprises an identification module 110, a first processing module 120, a calculation module 130 and a second processing module 140.
The identification module 110 is configured to respond to an image processing request sent by a user, acquire an original image to be processed in the request, input the original image into at least one image recognition model trained in advance, and identify coordinate information of each target object in the original image and attribute information of each target object.
In this embodiment, the scheme is explained by taking a kidney pathology digital slide image as the original image to be processed. When an image processing request sent by a user is received, the request is responded to, the original image to be processed is obtained from the request and input into at least one pre-trained image recognition model, and the coordinate information and attribute information of each target object in the original image are recognized. A target object of the kidney pathology digital slide image may be a glomerulus, and its attribute information may include: the coordinate position information, contour, and circumscribed-rectangle length and width of every glomerulus in the kidney pathology digital slide image. It should be noted that different pieces of attribute information of the target object are recognized by different models; specifically, a mapping relationship table between each piece of attribute information and its recognition model may be established in advance, and the corresponding recognition model looked up in that table. Image recognition is a mature technology, so the specific model training process is not described here.
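The mapping relationship table described above can be sketched as a simple keyed lookup. The attribute names and model names below are placeholders for illustration, not identifiers used by the patent:

```javascript
// Sketch of the mapping relationship table between attribute information
// and recognition models. All entries are illustrative placeholders:
// the patent does not name its models.
const attributeModelTable = new Map([
  ["coordinate position", "glomerulusDetectionModel"],
  ["contour",             "glomerulusSegmentationModel"],
  ["bounding rectangle",  "glomerulusBoundingBoxModel"],
]);

// Resolve the recognition model responsible for a given kind of
// attribute information; fail loudly if none is registered.
function modelForAttribute(attribute) {
  const model = attributeModelTable.get(attribute);
  if (model === undefined) {
    throw new Error(`no recognition model registered for "${attribute}"`);
  }
  return model;
}
```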
In an embodiment, coordinate information of the sub-target object corresponding to the target object and attribute information of each sub-target object may also be identified by using at least one pre-trained sub-target identification model. The sub-target object corresponding to the target object may be an intraglomerular cell within the glomerulus, and the attribute information of the sub-target object may be position coordinate information, a contour, and the like of the intraglomerular cell within the glomerulus.
The first processing module 120 is configured to, when it is monitored that a user performs a touch operation on any one target object in the original image, add a mask layer to the original image, obtain target attribute information of the target object, calculate, in real time, a maximum magnification factor of the target object on a current page based on the target attribute information, and convert the original image into a first preset object based on the maximum magnification factor.
In this embodiment, when a touch operation by the user on any target object in the original image is detected (in a practical scenario, when a doctor clicks a glomerulus in the renal pathology digital slide image with the mouse), a mask layer is added to the renal pathology digital slide image. The mask layer is a canvas in the slide viewer; after it is added, the page effectively contains two slide viewers: a large viewer for viewing glomeruli and tissue strips, and a small viewer for viewing the intraglomerular cells of a glomerulus.
Next, the target attribute information of the target object is acquired, the maximum magnification of the target object in the current interface is calculated in real time based on that information, and the original image is initialized into a first preset object according to the calculated maximum magnification. Specifically, the attribute information of each glomerulus is traversed to obtain the width and height of the circumscribed rectangle of the glomerulus clicked with the mouse, the maximum magnification at which that glomerulus can be displayed in the small magnification window is calculated in real time, and the kidney pathology digital slide image is converted into an OpenSeadragon object according to the calculated magnification. The maximum magnification is calculated so that the glomerulus occupies the magnification frame as fully as possible, avoiding a display that is too small to see clearly or so large that the glomerulus is cut off.
Further, calculating the maximum magnification of the target object in the current page in real time based on the target attribute information comprises:
acquiring a preset amplifying window and a width value of a current page, calculating to obtain a multiple ratio of the current page to the preset amplifying window based on a second preset calculation rule, and acquiring a width value and a height value of the target object from the target attribute information;
judging whether the width value of the target object is larger than the height value of the target object or not, and when the width value of the target object is judged to be larger than the height value of the target object, calculating by utilizing a third preset calculation rule and the multiple ratio to obtain the maximum magnification of the target object in the current page;
and when the width value of the target object is judged to be smaller than the height value of the target object, calculating by utilizing a fourth preset calculation rule and the multiple ratio to obtain the maximum magnification of the target object in the current page.
The glomerular width w1 and height h1 are obtained from the target attribute information; the preset magnification window has width and height m, the width of the current interface is imgW, and Z denotes the maximum magnification of the glomerulus in the magnification frame.
First, the ratio of the picture to the magnification frame is calculated: multiple = imgW / m, where the result value may be taken as an integer.
Then the relative sizes of w1 and h1 are compared: when w1 > h1, Z = m / (w1 + 5) / multiple; otherwise, correspondingly, Z = m / (h1 + 5) / multiple.
Since the glomerulus is irregular, 5 can be added to its width or height to prevent it from being cut off in the magnification frame, so that the glomerulus is displayed as large as possible without being clipped.
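The calculation above can be sketched as a small function. The integer rounding of multiple and the symmetric handling of the height case follow the description; the function name and rounding direction are assumptions:

```javascript
// Minimal sketch of the maximum-magnification calculation described
// above (second/third/fourth preset calculation rules).
// w1, h1: width/height of the glomerulus's circumscribed rectangle
// m: side length of the preset (square) magnification window
// imgW: width of the current page/interface
function maxMagnification(w1, h1, m, imgW) {
  // Ratio of the page to the magnification frame, taken as an integer.
  const multiple = Math.floor(imgW / m);
  // Pad the larger dimension by 5 so the irregular glomerulus
  // is displayed as large as possible without being clipped.
  if (w1 > h1) {
    return m / (w1 + 5) / multiple;
  }
  return m / (h1 + 5) / multiple;
}
```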
A calculating module 130, configured to add an overlay to the first preset object, obtain coordinate information of the center point of the target object in the first preset object according to the target attribute information and a first preset calculating rule, and move a preset amplification window corresponding to the overlay to a position corresponding to the coordinate information of the center point of the target object in the first preset object.
In this embodiment, in order to draw and display the paths of all intraglomerular cells of the glomerulus, an overlay layer may be added to the first preset object, and the canvas is drawn on this overlay layer. Specifically, an addHandler('open') event handler is registered on the first preset object initialized from the original image; when the open event is triggered, the overlay layer is added to the OpenSeadragon canvas, the coordinate information of the center point of the target object in the first preset object is calculated according to the target attribute information and a first preset calculation rule, and the preset magnification window corresponding to the overlay layer is moved to the position corresponding to that coordinate information in the first preset object.
Further, the step of calculating the coordinate information of the center point of the target object in the first preset object according to the target attribute information and a first preset calculation rule includes:
acquiring a width value and a height value of the target object from the target attribute information, and an abscissa value and an ordinate value of the target object in the original image, acquiring a width value and a height value of a preset amplification window, and calculating to obtain an abscissa value of the center point of the target object in the first preset object based on the width value of the target object, the abscissa value of the target object in the original image and the width value of the preset amplification window;
and calculating to obtain a longitudinal coordinate value of the central point of the target object in the first preset object based on the height value of the target object, the longitudinal coordinate value of the target object in the original image and the height value of the preset amplification window.
The glomerular width w1, height h1, and coordinates (x, y) in the original image are obtained from the attribute information, and the preset magnification window has a predefined width W0 and height H0. The abscissa of the target object's center point in the first preset object is Xm = x + (w1 / 2) * (W0 / w1), and its ordinate is Ym = y + (h1 / 2) * (H0 / h1). A viewport() method is then called to move the preset magnification window to that coordinate point.
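The first preset calculation rule above can be sketched directly from those two formulas; the function and parameter names are illustrative:

```javascript
// Sketch of the center-point calculation (first preset calculation rule).
// (x, y): target object's coordinates in the original image
// w1, h1: target object's width/height; W0, H0: magnification window size
function centerPoint(w1, h1, x, y, W0, H0) {
  const Xm = x + (w1 / 2) * (W0 / w1); // algebraically x + W0 / 2
  const Ym = y + (h1 / 2) * (H0 / h1); // algebraically y + H0 / 2
  return { Xm, Ym };
}
```

Note that each formula algebraically reduces to offsetting the original coordinate by half the window dimension, which places the magnification window so its center lands on the target.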
The second processing module 140 is configured to construct a second preset object based on the mask layer and store the second preset object in a preset storage path, acquire attribute information of a plurality of sub-target objects corresponding to the target object, convert the attribute information of each sub-target object into data in a preset format and fill the data in a set of the second preset object, acquire a target magnification factor triggered by a user based on the first preset object, and perform a magnification operation on the preset magnification window according to the set of the second preset object filled with the data and the target magnification factor.
In this embodiment, a second preset object is constructed based on the mask layer and stored under the preset storage path. The second preset object is a Raphael object; it is constructed so that, after the attribute information of the intraglomerular cells has been converted into svg-format data, the paths of the various cell types can be displayed on the canvas in corresponding colors, and the constructed Raphael object is cached. Then the attribute information of a plurality of sub-target objects corresponding to the target object is acquired; the sub-target objects may be the intraglomerular cells of the glomerulus, and their attribute information may be coordinate information, contours, and the like. Converting the attribute information of each sub-target object into data in a preset format and filling it into the second preset object means converting that attribute information into svg-format data and filling it into the set of the Raphael object. A magnification triggered by the user based on the first preset object is then acquired, and the magnification operation is performed on the preset magnification window according to the data-filled set of the Raphael object and the user-triggered magnification. When a switching operation triggered by the user is monitored, the current page is switched to the interface display corresponding to that operation. When the magnification window of a glomerulus is open, the distribution diagram of M-type cells within the glomerulus is displayed initially; when the user presses the switching tab, the whole mesangium can be viewed; and when the user clicks another glomerulus while the magnification window is open, the image in the magnification window is switched to the enlarged image of the newly clicked glomerulus.
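One plausible shape for the svg-format conversion of a sub-target object's contour is an SVG path-data string; the patent does not fix the encoding, so the following is only a hedged sketch under that assumption:

```javascript
// Hypothetical conversion of a contour (a list of [x, y] points) into
// SVG path data suitable for filling into a Raphael set. The "M/L/Z"
// encoding is one common choice, not mandated by this disclosure.
function contourToSvgPath(points) {
  if (points.length === 0) return "";
  const [first, ...rest] = points;
  const segments = rest.map(p => `L${p[0]} ${p[1]}`).join(" ");
  // Close the path with Z so the cell contour forms a filled region.
  return `M${first[0]} ${first[1]} ${segments} Z`.replace(/\s+/g, " ").trim();
}
```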
Further, obtaining the target magnification factor triggered by the user based on the first preset object comprises:
adding a scaling scale on the display page corresponding to the first preset object, configuring a moving button on the scaling scale, monitoring the end-point position information of the moving button dragged by the user, calculating the length between the end-point position of the moving button and the origin position of the scaling scale, and taking the magnification corresponding to that length as the target magnification.
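The drag-to-magnification mapping just described can be sketched as follows; the linear mapping and the clamped range are assumptions, since the patent only states that a magnification corresponds to the dragged length:

```javascript
// Hypothetical sketch: map the length from the scale's origin to the
// drag end position linearly onto a magnification range.
// endPos, originPos: positions along the scale axis
// scaleLength: total length of the scaling scale
// minZoom, maxZoom: assumed magnification range of the scale
function magnificationFromDrag(endPos, originPos, scaleLength, minZoom, maxZoom) {
  // Clamp the dragged length to [0, scaleLength].
  const len = Math.min(Math.max(endPos - originPos, 0), scaleLength);
  return minZoom + (len / scaleLength) * (maxZoom - minZoom);
}
```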
In addition, the invention also provides an image processing method. Fig. 3 is a schematic method flow diagram of an embodiment of the image processing method according to the present invention. The processor 12 of the electronic device 1, when executing the image processing program 10 stored in the memory 11, implements the following steps of the image processing method:
step S10: responding an image processing request sent by a user, acquiring an original image to be processed in the request, respectively inputting the original image into at least one image recognition model trained in advance, and recognizing the coordinate information of each target object in the original image and the attribute information of each target object.
In this embodiment, the solution is explained by taking a kidney pathology digital slide image as the original image to be processed. When an image processing request sent by a user is received, the request is responded to: the original image to be processed in the request is obtained and input into at least one pre-trained image recognition model, which recognizes the coordinate information of each target object in the original image and the attribute information of each target object. For a kidney pathology digital slide image, a target object may be a glomerulus, and the attribute information of a target object may include the coordinate position information, contour, and circumscribed-rectangle length and width of each glomerulus in the slide image. It should be noted that different attribute information of the target object is identified by different models; specifically, a mapping relationship table between each kind of attribute information and its recognition model may be established in advance, and the corresponding recognition model looked up in that table. Image recognition is a mature technology, and the specific model training process is not described here.
In an embodiment, the coordinate information of the sub-target objects corresponding to the target object and the attribute information of each sub-target object may also be identified using at least one pre-trained sub-target recognition model. A sub-target object corresponding to the target object may be an intraglomerular cell of the glomerulus, and the attribute information of a sub-target object may be the position coordinate information, contour, and the like of that cell.
Step S20: when it is monitored that a user performs touch operation on any target object in the original image, a mask layer is added to the original image, target attribute information of the target object is obtained, the maximum magnification factor of the target object on the current page is calculated in real time based on the target attribute information, and the original image is converted into a first preset object based on the maximum magnification factor.
In this embodiment, when it is monitored that a user performs a touch operation on any target object in the original image (in a practical application scenario, when a doctor clicks a glomerulus in the renal pathology digital slide image with the mouse), a mask layer is added to the renal pathology digital slide image. The mask layer is a canvas in a slide viewer. After the mask layer is added, the page effectively contains two slide viewers: a large slide viewer for viewing glomeruli and tissue strips, and a small slide viewer for viewing the intraglomerular cells of a glomerulus.
Next, the target attribute information of the target object is acquired, the maximum magnification of the target object in the current interface is calculated in real time based on that information, and the original image is initialized into a first preset object according to the calculated maximum magnification. Specifically, the attribute information of each glomerulus is traversed to obtain the width and height of the circumscribed rectangle of the glomerulus clicked with the mouse, the maximum magnification at which that glomerulus can be displayed in the small magnification window is calculated in real time, and the kidney pathology digital slide image is converted into an OpenSeadragon object according to the calculated magnification. The maximum magnification is calculated so that the glomerulus occupies the magnification frame as fully as possible, avoiding a display that is too small to see clearly or so large that the glomerulus is cut off.
Further, calculating the maximum magnification of the target object in the current page in real time based on the target attribute information comprises:
acquiring a preset amplifying window and a width value of a current page, calculating to obtain a multiple ratio of the current page to the preset amplifying window based on a second preset calculation rule, and acquiring a width value and a height value of the target object from the target attribute information;
judging whether the width value of the target object is larger than the height value of the target object or not, and when the width value of the target object is judged to be larger than the height value of the target object, calculating by utilizing a third preset calculation rule and the multiple ratio to obtain the maximum magnification of the target object in the current page;
and when the width value of the target object is judged to be smaller than the height value of the target object, calculating by utilizing a fourth preset calculation rule and the multiple ratio to obtain the maximum magnification of the target object in the current page.
The glomerular width w1 and height h1 are obtained from the target attribute information; the preset magnification window has width and height m, the width of the current interface is imgW, and Z denotes the maximum magnification of the glomerulus in the magnification frame.
First, the ratio of the picture to the magnification frame is calculated: multiple = imgW / m, where the result value may be taken as an integer.
Then the relative sizes of w1 and h1 are compared: when w1 > h1, Z = m / (w1 + 5) / multiple; otherwise, correspondingly, Z = m / (h1 + 5) / multiple.
Since the glomerulus is irregular, 5 can be added to its width or height to prevent it from being cut off in the magnification frame, so that the glomerulus is displayed as large as possible without being clipped.
Step S30: adding an overlay layer to the first preset object, calculating according to the target attribute information and a first preset calculation rule to obtain coordinate information of the central point of the target object in the first preset object, and moving a preset amplification window corresponding to the overlay layer to a position corresponding to the coordinate information of the central point of the target object in the first preset object.
In this embodiment, in order to draw and display the paths of all intraglomerular cells of the glomerulus, an overlay layer may be added to the first preset object, and the canvas is drawn on this overlay layer. Specifically, an addHandler('open') event handler is registered on the first preset object initialized from the original image; when the open event is triggered, the overlay layer is added to the OpenSeadragon canvas, the coordinate information of the center point of the target object in the first preset object is calculated according to the target attribute information and a first preset calculation rule, and the preset magnification window corresponding to the overlay layer is moved to the position corresponding to that coordinate information in the first preset object.
Further, the step of calculating the coordinate information of the center point of the target object in the first preset object according to the target attribute information and a first preset calculation rule includes:
acquiring a width value and a height value of the target object from the target attribute information, and an abscissa value and an ordinate value of the target object in the original image, acquiring a width value and a height value of a preset amplification window, and calculating to obtain an abscissa value of the center point of the target object in the first preset object based on the width value of the target object, the abscissa value of the target object in the original image and the width value of the preset amplification window;
and calculating to obtain a longitudinal coordinate value of the central point of the target object in the first preset object based on the height value of the target object, the longitudinal coordinate value of the target object in the original image and the height value of the preset amplification window.
The glomerular width w1, height h1, and coordinates (x, y) in the original image are obtained from the attribute information, and the preset magnification window has a predefined width W0 and height H0. The abscissa of the target object's center point in the first preset object is Xm = x + (w1 / 2) * (W0 / w1), and its ordinate is Ym = y + (h1 / 2) * (H0 / h1). A viewport() method is then called to move the preset magnification window to that coordinate point.
Step S40: and constructing a second preset object based on the mask layer and storing the second preset object to a preset storage path, acquiring attribute information of a plurality of sub-target objects corresponding to the target object, converting the attribute information of each sub-target object into data in a preset format and filling the data into a set of the second preset object, acquiring a target magnification factor triggered by a user based on the first preset object, and performing amplification operation on the preset amplification window according to the set of the second preset object filled with the data and the target magnification factor.
In this embodiment, a second preset object is constructed based on the mask layer and stored under the preset storage path. The second preset object is a Raphael object; it is constructed so that, after the attribute information of the intraglomerular cells has been converted into svg-format data, the paths of the various cell types can be displayed on the canvas in corresponding colors, and the constructed Raphael object is cached. Then the attribute information of a plurality of sub-target objects corresponding to the target object is acquired; the sub-target objects may be the intraglomerular cells of the glomerulus, and their attribute information may be coordinate information, contours, and the like. Converting the attribute information of each sub-target object into data in a preset format and filling it into the second preset object means converting that attribute information into svg-format data and filling it into the set of the Raphael object. A magnification triggered by the user based on the first preset object is then acquired, and the magnification operation is performed on the preset magnification window according to the data-filled set of the Raphael object and the user-triggered magnification. When a switching operation triggered by the user is monitored, the current page is switched to the interface display corresponding to that operation. When the magnification window of a glomerulus is open, the distribution diagram of M-type cells within the glomerulus is displayed initially; when the user presses the switching tab, the whole mesangium can be viewed; and when the user clicks another glomerulus while the magnification window is open, the image in the magnification window is switched to the enlarged image of the newly clicked glomerulus.
Further, obtaining the target magnification factor triggered by the user based on the first preset object comprises:
adding a scaling scale on the display page corresponding to the first preset object, configuring a moving button on the scaling scale, monitoring the end-point position information of the moving button dragged by the user, calculating the length between the end-point position of the moving button and the origin position of the scaling scale, and taking the magnification corresponding to that length as the target magnification.
Furthermore, an embodiment of the present invention also provides a computer-readable storage medium, which may be any one or any combination of a hard disk, a multimedia card, an SD card, a flash memory card, an SMC, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, and the like. The computer-readable storage medium includes a data storage area and a program storage area: the data storage area stores data created according to the use of the blockchain node, and the program storage area stores an image processing program 10, which, when executed by a processor, implements the following operations:
an identification step: responding an image processing request sent by a user, acquiring an original image to be processed in the request, respectively inputting the original image into at least one image recognition model trained in advance, and recognizing coordinate information of each target object and attribute information of each target object in the original image;
a first processing step: when it is monitored that a user performs touch operation on any target object in the original image, adding a mask layer to the original image, acquiring target attribute information of the target object, calculating the maximum magnification of the target object on the current page in real time based on the target attribute information, and converting the original image into a first preset object based on the maximum magnification;
a calculation step: adding an overlay layer to the first preset object, calculating according to the target attribute information and a first preset calculation rule to obtain coordinate information of a central point of the target object in the first preset object, and moving a preset amplification window corresponding to the overlay layer to a position corresponding to the coordinate information of the central point of the target object in the first preset object; and
a second processing step: and constructing a second preset object based on the mask layer, acquiring attribute information of a plurality of sub-target objects corresponding to the target object, converting the attribute information of each sub-target object into data in a preset format and filling the data into a set of the second preset object, acquiring a target magnification factor triggered by a user based on the first preset object, and performing amplification operation on the preset amplification window according to the set of the second preset object filled with the data and the target magnification factor.
In one embodiment, the image processing method provided by the invention can be applied to the fields of intelligent medical treatment and the like, so that the construction of a smart city is promoted.
In another embodiment, in order to further ensure the privacy and security of all the data, all the data may be stored in nodes of a blockchain. For example, the original image to be processed, or the attribute information of the target objects of the original image, may be stored in blockchain nodes.
It should be noted that the blockchain in the present invention is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks linked by cryptographic methods, each containing the information of a batch of network transactions, used to verify the validity (anti-counterfeiting) of that information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The embodiment of the computer readable storage medium of the present invention is substantially the same as the embodiment of the image processing method, and will not be described herein again.
It should be noted that the above-mentioned numbers of the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments. And the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, apparatus, article, or method that includes the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention essentially or contributing to the prior art can be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) as described above and includes several instructions for enabling a terminal device (such as a mobile phone, a computer, an electronic device, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An image processing method applied to an electronic device, the method comprising:
an identification step: responding an image processing request sent by a user, acquiring an original image to be processed in the request, respectively inputting the original image into at least one image recognition model trained in advance, and recognizing coordinate information of each target object and attribute information of each target object in the original image;
a first processing step: when it is monitored that a user performs touch operation on any target object in the original image, adding a mask layer to the original image, acquiring target attribute information of the target object, calculating the maximum magnification of the target object on the current page in real time based on the target attribute information, and converting the original image into a first preset object based on the maximum magnification;
a calculation step: adding an overlay layer to the first preset object, calculating according to the target attribute information and a first preset calculation rule to obtain coordinate information of a central point of the target object in the first preset object, and moving a preset amplification window corresponding to the overlay layer to a position corresponding to the coordinate information of the central point of the target object in the first preset object; and
a second processing step: and constructing a second preset object based on the mask layer, acquiring attribute information of a plurality of sub-target objects corresponding to the target object, converting the attribute information of each sub-target object into data in a preset format and filling the data into a set of the second preset object, acquiring a target magnification factor triggered by a user based on the first preset object, and performing amplification operation on the preset amplification window according to the set of the second preset object filled with the data and the target magnification factor.
2. The image processing method of claim 1, wherein said calculating in real-time a maximum magnification of the target object in the current page based on the target attribute information comprises:
acquiring a preset amplifying window and a width value of a current page, calculating to obtain a multiple ratio of the current page to the preset amplifying window based on a second preset calculation rule, and acquiring a width value and a height value of the target object from the target attribute information;
judging whether the width value of the target object is larger than the height value of the target object or not, and when the width value of the target object is judged to be larger than the height value of the target object, calculating by utilizing a third preset calculation rule and the multiple ratio to obtain the maximum magnification of the target object in the current page;
and when the width value of the target object is judged to be smaller than the height value of the target object, calculating by utilizing a fourth preset calculation rule and the multiple ratio to obtain the maximum magnification of the target object in the current page.
3. The image processing method according to claim 1, wherein the calculating the coordinate information of the center point of the target object in the first preset object according to the target attribute information and a first preset calculation rule includes:
acquiring a width value and a height value of the target object from the target attribute information, and an abscissa value and an ordinate value of the target object in the original image, acquiring a width value and a height value of a preset amplification window, and calculating to obtain an abscissa value of the center point of the target object in the first preset object based on the width value of the target object, the abscissa value of the target object in the original image and the width value of the preset amplification window;
and calculating to obtain a longitudinal coordinate value of the central point of the target object in the first preset object based on the height value of the target object, the longitudinal coordinate value of the target object in the original image and the height value of the preset amplification window.
4. The image processing method of claim 1, wherein the obtaining a user-triggered target magnification based on a first preset object comprises:
adding a scaling scale on a display page corresponding to the first preset object, and configuring a moving button on the scaling scale;
monitoring the end position to which the user drags the moving button, and calculating the distance from the end position of the moving button to the origin of the scaling scale; and
taking the magnification corresponding to the distance as the target magnification.
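The claim only says the magnification "corresponds to" the drag distance. A sketch assuming the simplest mapping, a linear interpolation along the scale (the range bounds and clamping are assumptions):

```python
def target_magnification(drag_length: float, scale_length: float,
                         min_mag: float = 1.0, max_mag: float = 8.0) -> float:
    """Map the moving button's distance from the scale origin to a
    magnification; a linear mapping is assumed here."""
    t = max(0.0, min(drag_length / scale_length, 1.0))  # clamp to the scale
    return min_mag + t * (max_mag - min_mag)
```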
5. The image processing method according to claim 1, wherein before the second processing step, the method further comprises:
and recognizing, by using at least one pre-trained sub-target recognition model, coordinate information of the sub-target objects corresponding to the target object and attribute information of each sub-target object.
6. An electronic device, comprising a memory and a processor, wherein an image processing program is stored in the memory, and when the image processing program is executed by the processor, the following steps are implemented:
an identification step: responding to an image processing request sent by a user, acquiring an original image to be processed from the request, inputting the original image into each of at least one pre-trained image recognition model, and recognizing coordinate information and attribute information of each target object in the original image;
a first processing step: when a touch operation by the user on any target object in the original image is detected, adding a mask layer to the original image, acquiring target attribute information of the target object, calculating in real time the maximum magnification of the target object in the current page based on the target attribute information, and converting the original image into a first preset object based on the maximum magnification;
a calculation step: adding an overlay layer to the first preset object, calculating coordinate information of the center point of the target object in the first preset object according to the target attribute information and a first preset calculation rule, and moving a preset amplification window corresponding to the overlay layer to a position corresponding to the coordinate information of the center point of the target object in the first preset object; and
a second processing step: constructing a second preset object based on the mask layer, acquiring attribute information of a plurality of sub-target objects corresponding to the target object, converting the attribute information of each sub-target object into data in a preset format and filling the data into a set of the second preset object, acquiring a target magnification triggered by the user based on the first preset object, and performing a magnification operation on the preset amplification window according to the filled set of the second preset object and the target magnification.
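The "preset format" for the sub-target attributes is not named in the claims. A sketch of the filling sub-step assuming the format is JSON (the function name, JSON choice, and deterministic key ordering are all assumptions for illustration):

```python
import json

def fill_second_preset_object(sub_objects: list) -> set:
    """Serialize each sub-target object's attributes into a preset format
    (JSON assumed here) and collect them into the second preset object's set."""
    collected = set()
    for attrs in sub_objects:
        # sort_keys gives a deterministic string, so identical attribute
        # dictionaries deduplicate naturally inside the set
        collected.add(json.dumps(attrs, sort_keys=True))
    return collected
```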
7. The electronic device of claim 6, wherein the calculating, in real-time, a maximum magnification of the target object in a current page based on the target attribute information comprises:
acquiring a width value of a preset amplification window and a width value of the current page, calculating a multiple ratio of the current page to the preset amplification window based on a second preset calculation rule, and acquiring a width value and a height value of the target object from the target attribute information;
determining whether the width value of the target object is larger than the height value of the target object, and when the width value of the target object is larger than the height value of the target object, calculating the maximum magnification of the target object in the current page by using a third preset calculation rule and the multiple ratio; and
when the width value of the target object is smaller than the height value of the target object, calculating the maximum magnification of the target object in the current page by using a fourth preset calculation rule and the multiple ratio.
8. The electronic device of claim 6, wherein the calculating the coordinate information of the center point of the target object in the first preset object according to the target attribute information and a first preset calculation rule includes:
acquiring, from the target attribute information, a width value and a height value of the target object and an abscissa value and an ordinate value of the target object in the original image; acquiring a width value and a height value of a preset amplification window; calculating an abscissa value of the center point of the target object in the first preset object based on the width value of the target object, the abscissa value of the target object in the original image, and the width value of the preset amplification window; and
calculating an ordinate value of the center point of the target object in the first preset object based on the height value of the target object, the ordinate value of the target object in the original image, and the height value of the preset amplification window.
9. The electronic device of claim 6, wherein the obtaining a user-triggered target magnification based on the first preset object comprises:
adding a scaling scale on a display page corresponding to the first preset object, and configuring a moving button on the scaling scale;
monitoring the end position to which the user drags the moving button, and calculating the distance from the end position of the moving button to the origin of the scaling scale; and
taking the magnification corresponding to the distance as the target magnification.
10. A computer-readable storage medium comprising a storage data area and a storage program area, wherein the storage data area stores data created according to the use of a blockchain node, and the storage program area stores an image processing program which, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 5.
CN202010602204.8A 2020-06-29 2020-06-29 Image processing method, electronic device, and storage medium Active CN111489295B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010602204.8A CN111489295B (en) 2020-06-29 2020-06-29 Image processing method, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010602204.8A CN111489295B (en) 2020-06-29 2020-06-29 Image processing method, electronic device, and storage medium

Publications (2)

Publication Number Publication Date
CN111489295A CN111489295A (en) 2020-08-04
CN111489295B true CN111489295B (en) 2020-11-17

Family

ID=71810606

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010602204.8A Active CN111489295B (en) 2020-06-29 2020-06-29 Image processing method, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN111489295B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112446823B (en) * 2021-02-01 2021-04-27 武汉中科通达高新技术股份有限公司 Monitoring image display method and device

Citations (2)

Publication number Priority date Publication date Assignee Title
CN107786904A (en) * 2017-10-30 2018-03-09 深圳Tcl数字技术有限公司 Picture amplification method, display device and computer-readable recording medium
CN110941375A (en) * 2019-11-26 2020-03-31 腾讯科技(深圳)有限公司 Method and device for locally amplifying image and storage medium

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US8467601B2 (en) * 2010-09-15 2013-06-18 Kyran Daisy Systems, methods, and media for creating multiple layers from an image
CN106055247A (en) * 2016-05-25 2016-10-26 努比亚技术有限公司 Picture display device, method and mobile terminal
CN110264401A (en) * 2019-05-16 2019-09-20 平安科技(深圳)有限公司 Continuous type image magnification method, device and storage medium based on radial basis function

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN107786904A (en) * 2017-10-30 2018-03-09 深圳Tcl数字技术有限公司 Picture amplification method, display device and computer-readable recording medium
CN110941375A (en) * 2019-11-26 2020-03-31 腾讯科技(深圳)有限公司 Method and device for locally amplifying image and storage medium

Non-Patent Citations (3)

Title
Image Zooming Method with Hierarchical Structure; Yihan Xiao et al.; 2013 International Conference on Information Science and Cloud Computing Companion; 20131207; pp. 787-792 *
Learning OpenSeadragon, Part 2 (Configuring Interface Zoom and Pan Rules); 英杰同学; cnblogs, https://www.cnblogs.com/yingjiehit/p/4365225.html; 20150325; pp. 1-4 *
Research on Medical Image Processing Technologies in a Teaching-Oriented PACS System; 赵崇; China Masters' Theses Full-text Database, Information Science and Technology; 20140915 (No. 9); pp. I138-1113 *

Also Published As

Publication number Publication date
CN111489295A (en) 2020-08-04

Similar Documents

Publication Publication Date Title
CN111932482A (en) Method and device for detecting target object in image, electronic equipment and storage medium
CN111831182B (en) Application icon control method and device and electronic equipment
CN106658139B (en) Focus control method and device
CN113126862B (en) Screen capture method and device, electronic equipment and readable storage medium
CN113761012A (en) Analysis visualization method of remote sensing data, server and storage medium
CN111291753A (en) Image-based text recognition method and device and storage medium
CN111489295B (en) Image processing method, electronic device, and storage medium
CN112016502A (en) Safety belt detection method and device, computer equipment and storage medium
KR20210106024A (en) Capture and save magnified images
CN113407144B (en) Display control method and device
CN111402066A (en) Data processing method, server and storage medium
CN111078491B (en) Monitoring information display method and device, monitoring terminal and computer storage medium
JP5563545B2 (en) Information processing apparatus and method
CN112333329A (en) Unread information reminding method and device and electronic equipment
CN112333389B (en) Image display control method and device and electronic equipment
CN114995914A (en) Picture data processing method and device, computer equipment and storage medium
CN114238528A (en) Map loading method and device, electronic equipment and storage medium
CN110442663B (en) Raster data batch clipping method and device and computer readable storage medium
CN107577398B (en) Interface animation control method, equipment and storage medium
CN108846883B (en) Quick drawing method and system for fractal graph, user equipment and storage medium
CN113791426A (en) Radar P display interface generation method and device, computer equipment and storage medium
CN111625693A (en) Data processing method, device, equipment and computer readable storage medium
CN113362227A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112765946A (en) Chart display method and device and electronic equipment
CN111796736A (en) Application sharing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231030

Address after: Room 2601 (Unit 07), Qianhai Free Trade Building, No. 3048, Xinghai Avenue, Nanshan Street, Qianhai Shenzhen-Hong Kong Cooperation Zone, Shenzhen, Guangdong 518000

Patentee after: Shenzhen Ping An Smart Healthcare Technology Co.,Ltd.

Address before: 1-34 / F, Qianhai free trade building, 3048 Xinghai Avenue, Mawan, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong 518000

Patentee before: Ping An International Smart City Technology Co.,Ltd.
