CN110650210B - Image data acquisition method, device and storage medium
- Publication number
- CN110650210B (application CN201910974450A, filed as CN201910974450.3A)
- Authority
- CN
- China
- Prior art keywords
- data
- client
- image data
- type
- data packet
- Prior art date
- Legal status
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/02—Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Epidemiology (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Primary Health Care (AREA)
- Public Health (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Medical Treatment And Welfare Office Work (AREA)
Abstract
The application discloses an image data acquisition method and device, belonging to the field of computer technology. In the application, one or more image data identifiers can be extracted from a plurality of data packets transmitted between a first client and a server, and the extracted image data identifiers are sent to a second client, so that the second client can obtain the corresponding image data according to the received identifiers. Because the image data identifiers are extracted from the data packets transmitted between the first client and the server, their accuracy is relatively high and there is no case in which they cannot be recognized, which ensures the success rate with which the second client acquires the image data and improves diagnostic efficiency.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for acquiring image data, and a storage medium.
Background
With the arrival of the digital information age and the development of artificial intelligence, the medical industry has also made great progress. For example, diagnostic imaging devices can acquire image data of various parts of the human body, and when a doctor examines the acquired image data, an artificial intelligence auxiliary diagnosis system can perform auxiliary analysis on the image data, enabling more convenient and accurate treatment.
In the related art, a terminal can acquire image data from a server of a medical image information system through a client of that system, display it, and perform auxiliary analysis on the displayed image data through a client of an artificial intelligence auxiliary diagnosis system. Because the medical image information system and the artificial intelligence auxiliary diagnosis system are two different systems, it is difficult to display the image data shown by the medical image information system directly in the artificial intelligence auxiliary diagnosis system. In this case, the terminal may recognize the identifier of the image data currently displayed by the client of the medical image information system through optical character recognition, download the image data through that identifier, and display the downloaded image data in the client of the artificial intelligence auxiliary diagnosis system for auxiliary analysis.
At present, when the terminal recognizes the image data identifier through optical character recognition, recognition errors or recognition failures can occur, which prevents the client of the artificial intelligence auxiliary diagnosis system from acquiring the image data and thereby reduces diagnostic efficiency.
Disclosure of Invention
The embodiments of the application provide an image data acquisition method, an image data acquisition device and a storage medium, which solve the problem of recognition errors and recognition failures that occur when an image data identifier is recognized through optical character recognition. The technical solution is as follows:
in one aspect, an image data acquiring method is provided, and the method includes:
acquiring a plurality of data packets transmitted between a first client and a server, wherein the first client is a client of a medical image information system, and the server is a server of the medical image information system;
extracting one or more image data identifications from the plurality of data packets;
and sending the one or more image data identifications to a second client so that the second client can obtain corresponding image data according to the one or more image data identifications, wherein the second client is a client of an artificial intelligence auxiliary diagnosis system.
Optionally, the extracting one or more image data identifiers from the plurality of data packets includes:
screening the plurality of data packets to obtain one or more target data packets, wherein the one or more target data packets are data packets sent when the first client requests data from the server;
and determining the one or more image data identifications according to the one or more target data packets.
Optionally, the screening the plurality of data packets to obtain one or more target data packets includes:
acquiring the data packet type of each data packet from the data packet information of each data packet;
and taking one or more data packets with the data packet type of a first type as the one or more target data packets, wherein the first type is the type of the data packet sent when the first client requests data from the server.
Optionally, the determining the one or more image data identifiers according to the one or more target data packets includes:
extracting application layer information of each target data packet from each target data packet;
segmenting the application layer information of each target data packet to obtain a plurality of data slices of each target data packet;
acquiring data slices of which the slice types are a second type and a third type from a plurality of data slices of each target data packet;
determining the one or more image data identifiers from the acquired data slices.
Optionally, the determining the one or more image data identifiers according to the acquired data slice includes:
extracting a first address field from the data slice whose slice type is the second type among the plurality of data slices of each target data packet, wherein the first address field is used for indicating a storage path of image data corresponding to the corresponding data packet;
extracting a second address field from the data slice with the slice type of the third type in the plurality of data slices of each target data packet, wherein the second address field is address information of the server;
and generating an image data identifier corresponding to each target data packet according to the first address field and the second address field extracted from each target data packet.
In another aspect, an image data acquiring apparatus is provided, the apparatus including:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a plurality of data packets transmitted between a first client and a server, the first client is a client of a medical image information system, and the server is a server of the medical image information system;
the extraction module is used for extracting one or more image data identifications from the plurality of data packets;
and the sending module is used for sending the one or more image data identifications to a second client so that the second client can obtain corresponding image data according to the one or more image data identifications, and the second client is a client of an artificial intelligence auxiliary diagnosis system.
Optionally, the extracting module includes:
the screening unit is used for screening the plurality of data packets to obtain one or more target data packets, and the one or more target data packets are data packets sent when the first client requests data from the server;
and the determining unit is used for determining the one or more image data identifications according to the one or more target data packets.
Optionally, the screening unit is specifically configured to:
acquiring the data packet type of each data packet from the data packet information of each data packet;
and taking one or more data packets with the data packet type of a first type as the one or more target data packets, wherein the first type is the type of the data packet sent when the first client requests data from the server.
Optionally, the determining unit includes:
an extraction subunit, configured to extract application layer information of each target packet from each target packet;
the segmentation subunit is used for segmenting the application layer information of each target data packet to obtain a plurality of data slices of each target data packet;
an acquisition subunit, configured to acquire, from the plurality of data slices of each target data packet, data slices of which slice types are a second type and a third type;
and the determining subunit is used for determining the one or more image data identifications according to the acquired data slices.
Optionally, the determining subunit is specifically configured to:
extracting a first address field from the data slice whose slice type is the second type among the plurality of data slices of each target data packet, wherein the first address field is used for indicating a storage path of image data corresponding to the corresponding data packet;
extracting a second address field from the data slice with the slice type of the third type in the plurality of data slices of each target data packet, wherein the second address field is address information of the server;
and generating an image data identifier corresponding to each target data packet according to the first address field and the second address field extracted from each target data packet.
In another aspect, an image data acquisition apparatus is provided, the apparatus comprising a processor, a communication interface, a memory, and a communication bus;
the processor, the communication interface and the memory complete mutual communication through the communication bus;
the memory is used for storing computer programs;
the processor is used for executing the program stored in the memory so as to realize the image data acquisition method.
In another aspect, a computer-readable storage medium is provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the image data acquiring method provided in the foregoing.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
in the embodiment of the application, one or more image data identifiers can be extracted from a plurality of data packets transmitted between the first client and the server, and the extracted image data identifiers are sent to the second client, so that the second client can obtain the corresponding image data according to the received identifiers. Because the image data identifiers are extracted from the data packets transmitted between the first client and the server, their accuracy is relatively high and there is no case in which they cannot be recognized, which ensures the success rate with which the second client acquires the image data and improves diagnostic efficiency.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a system architecture diagram for image data acquisition according to an embodiment of the present disclosure;
fig. 2 is a flowchart of an image data acquiring method according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an image data acquiring apparatus according to an embodiment of the present disclosure;
fig. 4 is a block diagram of an image data acquiring terminal according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Before explaining the embodiments of the present application in detail, an application scenario related to the embodiments of the present application will be described.
At present, during an examination at a hospital, a diagnostic imaging device can acquire image data of the examined part. The terminal can then acquire the image data from the server of the medical image information system through the client of the medical image information system and display it, and the client of the artificial intelligence auxiliary diagnosis system can display the image data and perform auxiliary analysis on it by using the identifier of the image data currently displayed by the client of the medical image information system. The image data acquisition method provided in the embodiments of the present application can be applied in this scenario: by acquiring the identifier of the image data, the image data can be displayed in the client of the artificial intelligence auxiliary diagnosis system and analyzed.
Next, a system architecture related to the image data acquisition method provided in the embodiment of the present application is described.
Fig. 1 is a system architecture diagram according to an embodiment of the present disclosure. As shown in fig. 1, the system 100 includes a server 101 and a terminal 102. The server 101 and the terminal 102 are connected by wireless or wired means to communicate with each other.
The terminal 102 is installed with a first client 1021 and a second client 1022, where the first client 1021 is a client of a medical image information system and the second client 1022 is a client of an artificial intelligence auxiliary diagnosis system. Because the medical image information system and the artificial intelligence auxiliary diagnosis system are different systems, image data displayed by the first client 1021 cannot be directly displayed in the second client 1022.
The terminal 102 further includes a network analysis module 1023, and optionally, the network analysis module 1023 may be an application installed in the terminal 102, a code that can be executed and stored in the terminal 102, or a code that is obtained by the terminal 102 from external hardware.
The terminal 102 sends an image data acquisition request to the server 101 through the first client 1021, receives the image data acquisition response returned by the server 101, obtains the requested image data from the response, and displays the image data through the first client 1021.
In this process, the terminal 102 may obtain the image data identifier of the image data requested by the first client 1021 through the network analysis module 1023, and send the image data identifier to the second client 1022, so that the second client 1022 obtains the image data from the server 101 through the image data identifier and displays the image data.
The image data identifier may be any identifier capable of uniquely identifying the image data, for example, the image data identifier may be an address of the image data, an id of the image data, or the like.
The server 101 is a server of the medical image information system, and may receive an image data acquisition request sent by the terminal 102, where the image acquisition request carries an identifier of image data requested by the terminal 102, and the server 101 may acquire the image data requested by the terminal according to the image data identifier, and then the server 101 may return an image data acquisition response carrying the requested image data to the terminal 102.
In the embodiment of the present application, the server 101 may be a server or a server cluster. The terminal 102 may be a tablet computer, a desktop computer, or other devices, which is not limited in this embodiment.
Next, an image data acquisition method provided in an embodiment of the present application is described.
Fig. 2 is a flowchart of an image data acquiring method according to an embodiment of the present disclosure. The method may be applied to the network analysis module in fig. 1. As shown in fig. 2, the method comprises the steps of:
step 201: a plurality of data packets transmitted between a first client and a server are acquired.
The first client is a client of the medical image information system, and the server is a server of the medical image information system.
When multiple data packets transmitted between the first client and the server are acquired, the network analysis module may acquire the data packets transmitted between the terminal and the server of the first client from all the data packets transmitted and received by the terminal where the first client is located according to the source address and the destination address of the data packets.
For example, the network analysis module may obtain all data packets transmitted from the first client to the medical image information system server by using the address of the terminal where the first client is located as a source address and the address of the medical image information system server as a destination address. Of course, all the data packets transmitted from the medical image information system server to the first client may be acquired by using the address of the medical image information system server as the source address and the address of the first client terminal as the destination address. The network analysis module may obtain all data packets transmitted between the terminal and the server according to an address of the terminal where the first client is located and an address of the server corresponding to the first client.
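The address-based screening described above can be illustrated with a minimal sketch; the packet-record structure, the field names, and the two addresses below are assumptions made for illustration rather than part of this embodiment:

```python
# Minimal sketch of address-based screening of captured packets.
# The record structure and the two addresses are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PacketRecord:
    src: str        # source address of the packet
    dst: str        # destination address of the packet
    payload: bytes  # application-layer payload

CLIENT_ADDR = "192.168.10.202"  # assumed address of the terminal hosting the first client
SERVER_ADDR = "192.168.10.176"  # assumed address of the medical image information system server

def is_client_server_packet(pkt: PacketRecord) -> bool:
    """Keep a packet only if it travels between the first client and the server."""
    return {pkt.src, pkt.dst} == {CLIENT_ADDR, SERVER_ADDR}

def screen_packets(all_packets: list[PacketRecord]) -> list[PacketRecord]:
    # Packets whose source or destination is not the server/client pair are dropped.
    return [p for p in all_packets if is_client_server_packet(p)]
```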
Optionally, in a possible case, the network analysis module may also obtain the plurality of data packets transmitted between the first client and the server by setting a monitored port. Ports are the entrances and exits through which the terminal communicates with the outside, and they can be classified into three categories by port number: well-known ports, registered ports, and dynamic and/or private ports.
In this embodiment, the network analysis module may monitor all ports of the terminal where the first client is located, and obtain the data packet transmitted through each port. That is, the network analysis module may obtain all data packets sent and received by the terminal. Then, the network analysis module may screen all the data packets according to the obtained source addresses and destination addresses of all the data packets, reserve any one of the source addresses or the destination addresses as the data packet of the address of the server corresponding to the first client, and delete the rest of the data packets. In this way, the data packets transmitted between the terminal and the server corresponding to the first client can be acquired from all the acquired data packets.
Alternatively, because different services use different protocols and transmit their data packets through different ports, and each service has a corresponding default port, the default port of the protocol used to transmit data packets between the terminal and the server of the first client can be taken as the monitored port. This prevents the problem of obtaining an excessive number of data packets when all ports are monitored and reduces the workload of data analysis. In this case, the network analysis module monitors the port in real time for packet transmission: when the terminal sends or receives a data packet through the port, the network analysis module captures it, and the captured packet is a data packet transmitted between the terminal and the server.
It should be noted that the first client may obtain the image data by sending an HTTP (Hypertext Transfer Protocol) request to the server, and accordingly, the server may feed the image data back to the first client by sending an HTTP response. That is, the data packets transmitted between the first client and the server to acquire the image data may be HTTP data packets. Since the default port used to transmit HTTP data packets is usually port 80, port 80 may be used as the monitored port, and the HTTP data packets transmitted through port 80 may be taken as the data packets transmitted between the terminal where the first client is located and the server.
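A possible sketch of the port-based capture described above is given below; it assumes the third-party scapy packet-capture library and port 80 as the monitored port, neither of which is required by this embodiment:

```python
# Sketch: capture HTTP packets on the default port 80.
# Assumes the scapy library is available; the port and library choice are illustrative.
from scapy.all import sniff, IP, TCP, Raw

captured = []

def handle(pkt):
    # Keep only packets that carry an application-layer payload on TCP port 80.
    if (pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw)
            and 80 in (pkt[TCP].sport, pkt[TCP].dport)):
        captured.append((pkt[IP].src, pkt[IP].dst, bytes(pkt[Raw].load)))

# Capture for a short window; in practice capture would run while the first client is in use.
sniff(filter="tcp port 80", prn=handle, store=False, timeout=30)
```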
Alternatively, in some cases, the terminal may not use the default port for transmission, but instead open a new port that is not a well-known port and use it to transmit HTTP data packets. In that case, the terminal may monitor this port and take the data packets transmitted through it as the data packets transmitted between the terminal and the server of the first client. Note that when a web page is accessed through an address in this situation, the port number must be written into the port field of the address, so that the terminal transmits the HTTP data packets through this port rather than through the default port.
In addition, in this embodiment of the application, the first client may apply for registering an account and a password with the server, and the server may store the account and the password and generate a corresponding relationship between the account and the password. When the first client acquires the image data from the server, the first client can log in the server through the account and the password.
Optionally, the network analysis module may store an organization format of information related to the image data when the first client obtains the image data. Therefore, the network analysis module can more accurately and quickly acquire the image data identifier from the data packet sent by the first client according to the information format.
Further, the network analysis module may correspond to a database, and the database may store the image data identifier acquired by the network analysis module. It should be noted that the database has a storage upper limit value, that is, the maximum number of the image data identifiers that can be stored.
Step 202: one or more image data identifiers are extracted from the plurality of data packets.
After the multiple data packets are obtained, the network analysis module can screen them to obtain one or more target data packets, where the one or more target data packets are the data packets sent when the first client requests data from the server; one or more image data identifiers are then determined based on the one or more target data packets.
As can be known from the introduction of step 201, the obtained multiple data packets may be multiple HTTP data packets transmitted between the first client and the server, where the multiple HTTP data packets include multiple request data packets for obtaining image data sent by the first client to the server, and multiple response data packets for returning image data sent by the server to the first client. Based on this, the network analysis module may obtain one or more request packets from the plurality of packets, and use the obtained one or more request packets as one or more target packets.
In one possible implementation manner, the network analysis module may obtain one or more request packets from the plurality of packets according to packet information of the plurality of packets. In this case, the network analysis module may obtain packet information of each of the plurality of packets, and obtain a packet type of each of the plurality of packets from the packet information of each of the plurality of packets; one or more data packets with the data packet type of a first type are used as one or more target data packets, and the first type is the type of the data packets sent when the first client requests data from the server.
It should be noted that, the packet information of a plurality of packets is usually organized in a list form, each line of information is packet information of one packet, and each line of packet information includes a number, a timestamp, a source address, a destination address, a protocol, a length, a packet type, a storage path, and a protocol version of a packet corresponding to the line of packet information.
The number is the position of the corresponding data packet among the acquired data packets; the timestamp is the time difference between the moment the network analysis module started capturing packets and the moment the corresponding packet was acquired; the source address is the address of the terminal where the first client that sent the packet is located; the destination address is the address of the server, corresponding to the first client, that receives the packet; the protocol is the protocol used to send the packet, which may be the HTTP protocol; the packet type is the method used to obtain the image data; and the storage path is the location where the image data is stored on the server.
Illustratively, if a row of packet information in the list is 2 0.000038 192.168.10.202 192.168.10.176 HTTP 553 GET /Images/img_sid/save2.png HTTP/1.1, then the number of the packet corresponding to that row is 2, the packet was acquired 0.000038 seconds after packet capture started, the source address of the packet is 192.168.10.202, the destination address is 192.168.10.176, the protocol used to send the packet is HTTP, and the length of the packet is 553. The packet type of the packet is GET, the storage path is /Images/img_sid/save2.png, and the HTTP protocol version used for transmission is 1.1.
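A row in this layout can be parsed into named fields as sketched below; the column order follows the example row above, and the helper name and field names are illustrative:

```python
# Sketch: parse one row of packet information into named fields.
# The column order follows the example row above; names are illustrative.
def parse_packet_info(row: str) -> dict:
    parts = row.split()
    return {
        "number":       int(parts[0]),
        "timestamp":    float(parts[1]),
        "source":       parts[2],
        "destination":  parts[3],
        "protocol":     parts[4],
        "length":       int(parts[5]),
        "packet_type":  parts[6],          # e.g. GET, POST, HEAD
        "storage_path": parts[7],
        "version":      parts[8],
    }

info = parse_packet_info(
    "2 0.000038 192.168.10.202 192.168.10.176 HTTP 553 GET /Images/img_sid/save2.png HTTP/1.1")
# info["packet_type"] == "GET", info["storage_path"] == "/Images/img_sid/save2.png"
```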
After obtaining the packet information of each packet, the network analysis module may extract the packet type from the packet information of each packet.
It should be noted that the packet types of commonly used HTTP request packets include GET, POST, and HEAD. The GET type is used to request the image data identified by a Uniform Resource Locator (URL); the POST type is used to append new data to the image data identified by the URL; and the HEAD type is used to request only the response header of the image data identified by the URL. In addition, only one packet type is used for transmission between a given first client and the server.
Since the acquired data packets may include the various types of packets described above, the network analysis module may obtain the packet type of each captured packet from its packet information, count the number of packets of each type among the currently acquired packets, and take the packet type with the largest count as the first type. Packets whose type is not the first type may then be deleted from the acquired packets, so that only packets of the first type remain, and those packets are used as the target data packets.
Screening the plurality of data packets in this way further reduces the number of target data packets obtained, the computing resources occupied when analyzing the packets, and the storage resources of the terminal occupied by the packets, thereby reducing the computing and storage overhead of the terminal.
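The screening rule described above (count the packet types and keep only the most common one) can be sketched as follows, reusing the parse_packet_info helper from the earlier sketch; the function name is illustrative:

```python
# Sketch: take the most common packet type among the captured packets as the
# "first type" and keep only packets of that type as target data packets.
from collections import Counter

def select_target_packets(info_rows: list[dict]) -> list[dict]:
    counts = Counter(row["packet_type"] for row in info_rows)
    first_type, _ = counts.most_common(1)[0]   # e.g. "GET"
    return [row for row in info_rows if row["packet_type"] == first_type]
```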
After one or more target data packets are acquired, the network analysis module can extract application layer information of each target data packet from each target data packet; segmenting the application layer information of each target data packet to obtain a plurality of data slices of each target data packet; acquiring data slices of which the slice types are a second type and a third type from a plurality of data slices of each target data packet; one or more image data identifiers are determined from the acquired data slices.
It should be noted that a data packet is encapsulated according to the five-layer network model, namely the physical layer, data link layer, network layer, transport layer, and application layer. The last layer of encapsulation is the application layer encapsulation, and the image data identifier is located in the application layer information of that encapsulation, so the application layer information can be extracted from each target data packet.
In the embodiment of the present application, the application layer information includes four parts: the request line, the request header, an empty line, and the request body of the HTTP request. The request line contains the packet type, the storage path, and the protocol version requested to be used, and ends with a carriage-return line-feed symbol. The request header may include additional information accompanying the request the first client sends to the server as well as information about the first client itself; the fields included in the request header vary with the situation. The content of the request body differs according to the packet type.
Illustratively, the request line of an HTTP request is Method Request-URL HTTP-Version CRLF, where Method indicates the packet type used by the request, Request-URL indicates the storage path, HTTP-Version indicates the protocol version of the request, and CRLF indicates the carriage return and line feed. Suppose the request line of an HTTP request is GET /Images/img_sid/save2.png HTTP/1.1 (CRLF); then the packet type used by the request is GET, the storage path is /Images/img_sid/save2.png, and the protocol version used is HTTP 1.1.
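Splitting the request line into its three components can be sketched as follows; the function name is illustrative and the example value is the request line shown above:

```python
# Sketch: split the request line "Method Request-URL HTTP-Version" into its parts.
def parse_request_line(line: str) -> tuple[str, str, str]:
    method, request_url, http_version = line.strip().split(" ", 2)
    return method, request_url, http_version

method, path, version = parse_request_line("GET /Images/img_sid/save2.png HTTP/1.1")
# method == "GET", path == "/Images/img_sid/save2.png", version == "HTTP/1.1"
```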
The request header may include fields such as Accept, User-Agent, Host, Content-Length, Accept-Encoding, and Accept-Language. The Accept field indicates the MIME (Multipurpose Internet Mail Extensions) file formats that the browser used by the first client can receive; the User-Agent field refers to the type of browser used by the first client; the Host field refers to the domain name and port number of the requested server; the Content-Length field refers to the length of the request body; the Accept-Encoding field indicates the compression encoding types, for data returned by the server, that the browser used by the first client can support; and the Accept-Language field refers to the language types that the browser used by the first client can accept.
Illustratively, the request header of an HTTP request is:
Host:localhost:8030
Content-Length:16
User-Agent:Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36
Accept:image/gif,image/x-xbitmap,image/jpeg,image/pjpeg,*/*
Accept-Encoding:gzip,deflate,br
Accept-Language:zh-CN,zh-EN
wherein, the HOST field indicates that the domain name of the requested server is localhost and the port number is 8030; the Content-Length field indicates that the Length of the request body is 16; the User-Agent field indicates that the type of browser used by the first client can be Mozilla, AppleWebKit, Chrome, and Safari; the Accept field indicates that the formats of images which can be received by the browser used by the first client are gif, x-xbitmap, jpeg and pjpeg; the Accept-Encoding field indicates that the browser used by the first client can support the compression Encoding types of gzip, deflate and br; the Accept-Language field indicates that the languages that the browser used by the first client can Accept are Chinese and English.
The content of the request body differs according to the packet type. When the packet type is GET, the request body is empty, and any data the first client sends to the server is appended after the storage path in the request line and sent to the server as part of that path. When the packet type is POST, the request body contains the data the first client sends to the server, and there is no limit on the amount of data.
After the application layer information of each target data packet is extracted from each target data packet, the application layer information of each target data packet can be segmented. One way to segment the application layer information is to split it at line breaks, that is, to treat the data corresponding to each line of the application layer information as one data slice. Since data is represented in binary or hexadecimal in the terminal, a line break appears in the data as "0d0a". The application layer information of each target data packet is then split at "0d0a" to obtain a plurality of data slices of each target data packet.
The data slices of each target data packet are then classified. The classification can be performed according to the content of each field in the four parts of the application layer information, and the type of each data slice, that is, the field to which the slice belongs, is marked according to the field corresponding to that slice. Illustratively, the type of a data slice may be request line, HOST, Content-Length, and so on.
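The segmentation and labeling described above can be sketched as follows; the example request bytes and the slice-type labels are illustrative assumptions:

```python
# Sketch: split application-layer information at CRLF ("0d0a") into data slices
# and label each slice with the field it belongs to. Labels are illustrative.
def slice_application_layer(app_bytes: bytes) -> list[tuple[str, str]]:
    slices = app_bytes.split(b"\r\n")           # "0d0a" is CRLF
    labeled = []
    for i, s in enumerate(slices):
        text = s.decode("iso-8859-1", errors="replace")
        if i == 0:
            labeled.append(("request line", text))
        elif ":" in text:
            field, _, _ = text.partition(":")
            labeled.append((field.strip(), text))   # e.g. "Host", "Content-Length"
        else:
            labeled.append(("body or empty line", text))
    return labeled

example = (b"GET /Images/img_sid/save2.png HTTP/1.1\r\n"
           b"Host: localhost:8030\r\n"
           b"Accept: image/gif\r\n\r\n")
slices = slice_application_layer(example)
# slices[0] == ("request line", "GET /Images/img_sid/save2.png HTTP/1.1")
# slices[1] == ("Host", "Host: localhost:8030")
```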
After obtaining the plurality of data slices of each target data packet, the network analysis module may obtain data slices of the second type and the third type from the plurality of data slices of each target data packet, and further obtain the image data identifier corresponding to each target data packet according to the data slices of the second type and the third type.
The second type is a type marked by a field containing a storage path of the image data, and the third type is a type marked by a field containing server information. As can be seen from the foregoing, the data slice of the request line type includes the packet type, the storage path, and the protocol version requested to be used, and based on this, in this embodiment of the present application, the second type may be referred to as a request line type; in addition, since the HOST type data slice includes the domain name and the port number of the requested server, the third type may refer to the HOST type in the embodiment of the present application.
After the second type and the third type of data slices are obtained, because the second type of data slices contain storage paths and the third type of data slices contain server information, a first address field can be extracted from the second type of data slices of the multiple data slices of each target data packet, and the first address field is used for indicating the storage paths of the image data corresponding to the corresponding data packets; extracting a second address field from a data slice with a third type slice type in the plurality of data slices of each target data packet, wherein the second address field is address information of the server; and then, generating an image data identifier corresponding to each target data packet according to the first address field and the second address field extracted from each target data packet. The address information of the server may refer to a domain name and a port number of the server.
In this embodiment of the application, the first client requests the image data from the server through the address of the image data, and an address on the network has a predetermined fixed composition format consisting of a domain name, a port number, and a storage path. This composition format can therefore be used as a template and matched against the acquired data slices of the second and third types. In this way, the storage path of the image data can be recognized in the second-type data slice, the recognized storage path can be obtained, and the storage path of the image data can be used as the first address field. Similarly, the domain name and the port number can be recognized in the third-type data slice, the recognized domain name and port number can be obtained, and they can be used as the second address field.
After the first address field and the second address field are acquired, they are combined according to the fixed composition format of a network address described above (domain name, port number, and storage path), that is, the first address field is placed after the second address field, so that the image data identifier corresponding to each target data packet is obtained. In this case, the image data identifier is the URL identifying the image data.
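Continuing from the labeled slices in the previous sketch, combining the two address fields into the image data identifier can be sketched as follows; plain string parsing is used here for illustration instead of the template or neural-network matching described in this embodiment, and the http scheme is an assumption:

```python
# Sketch: build the image data identifier (URL) from the request-line slice
# (first address field: storage path) and the Host slice (second address field:
# domain name and port). Plain string parsing is used for illustration only.
def build_image_identifier(labeled_slices: list[tuple[str, str]]) -> str:
    request_line = next(text for label, text in labeled_slices if label == "request line")
    host_line = next(text for label, text in labeled_slices if label == "Host")
    storage_path = request_line.split(" ")[1]                 # first address field
    server_address = host_line.partition(":")[2].strip()      # second address field, e.g. "localhost:8030"
    return "http://" + server_address + storage_path          # scheme assumed to be plain HTTP

url = build_image_identifier(slices)
# url == "http://localhost:8030/Images/img_sid/save2.png"
```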
In order to enable the network analysis module to identify the storage path, the domain name and the port number of the image data, the network analysis module may include a neural network trained according to a plurality of addresses with fixed composition formats. The network analysis module can identify a storage path of the image data from the second type of data slice and identify a domain name and a port number of the server from the third type of slice data through the neural network. And then, the network analysis module combines the storage path, the domain name and the port number of the image data to obtain an image data identifier corresponding to each target data packet.
Optionally, the storage path portion in the second-type data slice may further include a parameter, namely an ID (identity) of the image data to be acquired.
In this embodiment of the present application, any information in each target data packet that can uniquely identify one piece of image data may also be obtained and used as the image data identifier, which is not limited here.
It should be noted that data packets are transmitted and stored in binary or hexadecimal form in the terminal or the server. Therefore, when the image data identifier is obtained, the data packet can be decoded so that the binary or hexadecimal data in the packet is rendered as character-form information, and the image data identifier can then be obtained from the rendered information. The decoding method may be any of the encodings supported in the aforementioned Accept-Encoding field, which is not limited here.
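Rendering a hexadecimal packet dump into character form, as described above, can be as simple as the following sketch; the hexadecimal string is an illustrative fragment corresponding to the example request line used earlier:

```python
# Sketch: turn a hexadecimal packet dump into character-form text before
# extracting the identifier. The hex string below is an illustrative fragment.
payload_hex = "474554202f496d616765732f696d675f7369642f73617665322e706e6720485454502f312e310d0a"
text = bytes.fromhex(payload_hex).decode("ascii", errors="replace")
# text == "GET /Images/img_sid/save2.png HTTP/1.1\r\n"
```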
The above embodiments mainly describe an implementation in which one or more request data packets are used as target data packets and the image data identifiers are obtained from them. In another possible implementation, one or more response data packets may also be used as the target data packets, and one or more image data identifiers may be determined from them. The application layer information of a response data packet includes four parts: a status line, a response header, an empty line, and a response body. The status line starts with the version of the HTTP protocol used by the server, followed, separated by spaces, by the response status code sent back by the server and a textual description of that status code. The response header is similar to the request header and adds some additional information to the response data packet. The response body is the processing result, and the browser used by the first client can take the data out of the body content to generate the corresponding image data. Since the response body includes information related to the image data, the network analysis module may extract one or more image data identifiers from the response body of the response data packet, which is not described again here.
Optionally, after the image data identifier is obtained, it may be stored in the database corresponding to the network analysis module. When the number of stored image data identifiers is equal to the storage upper limit of the database and a new image data identifier needs to be stored, the previously stored identifiers can be deleted in the order in which they were stored, and the newly acquired identifier is then stored in the database.
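The bounded storage behavior described above (oldest identifiers removed first once the upper limit is reached) can be sketched with a fixed-size queue; the limit value is an illustrative assumption:

```python
# Sketch: store image data identifiers with a storage upper limit, evicting the
# oldest identifier when a new one arrives and the store is full.
from collections import deque

MAX_IDENTIFIERS = 1000                      # illustrative upper limit
identifier_store = deque(maxlen=MAX_IDENTIFIERS)

def save_identifier(identifier: str) -> None:
    # A deque with maxlen drops the oldest entry automatically when full,
    # matching the "delete previously stored identifiers in storage order" rule.
    identifier_store.append(identifier)
```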
Step 203: and sending the extracted one or more image data identifications to the second client so that the second client acquires corresponding image data according to the one or more image data identifications.
The second client is a client of the artificial intelligence auxiliary diagnosis system.
In some embodiments, the first client and the second client are two independent clients. In this case, when the second client needs to perform auxiliary analysis on the image data currently displayed by the first client, the second client may be opened, and the network analysis module may send the acquired identifier of the image data currently displayed by the first client to the second client. The second client may display the most recently obtained image data identifier received from the network analysis module, acquire the image data according to that identifier, display the image data, and perform auxiliary analysis on it.
Illustratively, if the image data identifier received by the second client is the address of the image data, the second client may obtain the image data from the server through that address; if the image data identifier received by the second client is the ID of the image data, the image data may be downloaded from the first client through the download interface provided by the first client according to that ID.
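When the identifier is the address (URL) of the image data, the second client's download step can be sketched as follows; the third-party requests library and the output file name are illustrative assumptions:

```python
# Sketch: the second client downloads the image data through its identifier (URL).
# Uses the third-party requests library; the URL and file name are illustrative.
import requests

def fetch_image(identifier_url: str, out_path: str = "image.png") -> str:
    resp = requests.get(identifier_url, timeout=10)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)
    return out_path

# fetch_image("http://localhost:8030/Images/img_sid/save2.png")
```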
In other embodiments, when the first client is a web-based client, the first client may display the image data through a browser, and the second client may be packaged in the browser as a plug-in. When auxiliary analysis needs to be performed on the image data currently displayed by the first client, the second client packaged in the browser as a plug-in may be invoked. The second client can then directly obtain the image data identifiers most recently acquired by the network analysis module, determine from them the identifier of the image data currently displayed by the first client, display that image data according to the image data identifier, and perform auxiliary analysis on it.
In one possible case, when the first client opens a page through the browser to display a piece of image data, the second client may be invoked by clicking the function button corresponding to the second client in the page. The second client may then obtain the identifier of the image data displayed in the current page and acquire the corresponding image data through that identifier for auxiliary analysis.
In the embodiment of the application, one or more image data identifiers can be extracted from a plurality of data packets transmitted between the first client and the server, and the extracted image data identifiers are sent to the second client, so that the second client can obtain the corresponding image data according to the received identifiers. Because the image data identifiers are extracted from the data packets transmitted between the first client and the server, their accuracy is relatively high and there is no case in which they cannot be recognized, which ensures the success rate with which the second client acquires the image data and improves diagnostic efficiency.
Referring to fig. 3, an embodiment of the present application provides an image data acquiring apparatus 300, where the image data acquiring apparatus may be applied in a terminal, and the apparatus 300 includes:
an obtaining module 301, configured to obtain a plurality of data packets transmitted between a first client and a server, where the first client is a client of a medical image information system, and the server is a server of the medical image information system;
an extracting module 302, configured to extract one or more image data identifiers from a plurality of data packets;
the sending module 303 is configured to send the one or more image data identifiers to a second client, so that the second client obtains corresponding image data according to the one or more image data identifiers, where the second client is a client of an artificial intelligence auxiliary diagnosis system.
Optionally, the extracting module 302 includes:
the screening unit is used for screening the plurality of data packets to obtain one or more target data packets, and the one or more target data packets are data packets sent when the first client requests data from the server;
and the determining unit is used for determining one or more image data identifications according to one or more target data packets.
Optionally, the screening unit is specifically configured to:
acquiring the data packet type of each data packet from the data packet information of each data packet;
and taking one or more data packets with the data packet type of a first type as one or more target data packets, wherein the first type is the type of the data packet sent when the first client requests data from the server.
Optionally, the determining unit includes:
an extraction subunit, configured to extract application layer information of each target packet from each target packet;
the segmentation subunit is used for segmenting the application layer information of each target data packet to obtain a plurality of data slices of each target data packet;
an acquisition subunit, configured to acquire, from the plurality of data slices of each target data packet, data slices of which slice types are a second type and a third type;
and the determining subunit is used for determining one or more image data identifications according to the acquired data slices.
Optionally, the determining subunit is specifically configured to:
extracting a first address field from a data slice with a second type of slice type in a plurality of data slices of each target data packet, wherein the first address field is used for indicating a storage path of image data corresponding to the corresponding data packet;
extracting a second address field from a data slice with a third type slice type in the plurality of data slices of each target data packet, wherein the second address field is address information of the server;
and generating an image data identifier corresponding to each target data packet according to the first address field and the second address field extracted from each target data packet.
In the embodiment of the application, one or more image data identifiers can be extracted from a plurality of data packets transmitted between the first client and the server, and the extracted image data identifiers are sent to the second client, so that the second client can obtain the corresponding image data according to the received identifiers. Because the image data identifiers are extracted from the data packets transmitted between the first client and the server, their accuracy is relatively high and there is no case in which they cannot be recognized, which ensures the success rate with which the second client acquires the image data and improves diagnostic efficiency.
It should be noted that: in the image data acquiring apparatus provided in the above embodiment, when acquiring image data, only the division of the above functional modules is taken as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the above described functions. In addition, the embodiments of the image data obtaining method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the embodiments of the methods for details, which are not described herein again.
Fig. 4 is a block diagram illustrating an image data acquiring terminal 400 according to an exemplary embodiment. The terminal 400 may be a notebook computer, a desktop computer, or the like.
Generally, the terminal 400 includes: a processor 401 and a memory 402.
In some embodiments, the terminal 400 may further optionally include: a peripheral interface 403 and at least one peripheral. The processor 401, memory 402 and peripheral interface 403 may be connected by buses or signal lines. Each peripheral may be connected to the peripheral interface 403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 404, a display screen 405, a camera assembly 406, an audio circuit 407, a positioning assembly 408, and a power supply 409.
The peripheral interface 403 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 401 and the memory 402. In some embodiments, processor 401, memory 402, and peripheral interface 403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 401, the memory 402 and the peripheral interface 403 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 404 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 404 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 404 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, various generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 404 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 405 is a touch display screen, the display screen 405 also has the ability to capture touch signals on or over the surface of the display screen 405. The touch signal may be input to the processor 401 as a control signal for processing. At this point, the display screen 405 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 405 may be one, providing the front panel of the terminal 400; in other embodiments, the display screen 405 may be at least two, respectively disposed on different surfaces of the terminal 400 or in a folded design; in still other embodiments, the display 405 may be a flexible display disposed on a curved surface or a folded surface of the terminal 400. Even further, the display screen 405 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The Display screen 405 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials. It should be noted that, in the embodiment of the present application, when the terminal 400 is a landscape terminal, the aspect ratio of the display screen of the terminal 400 is greater than 1, for example, the aspect ratio of the display screen of the terminal 400 may be 16:9 or 4: 3. When the terminal 400 is a portrait terminal, the aspect ratio of the display of the terminal 400 is less than 1, for example, the aspect ratio of the display of the terminal 400 may be 9:18 or 3:4, etc.
The camera assembly 406 is used to capture images or video. Optionally, camera assembly 406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 406 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 407 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 401 for processing, or inputting the electric signals to the radio frequency circuit 404 for realizing voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 400. The microphone may also be an array microphone or an omni-directional acquisition microphone. The speaker is used to convert electrical signals from the processor 401 or the radio frequency circuit 404 into sound waves. The loudspeaker can be a traditional film loudspeaker and can also be a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 407 may also include a headphone jack.
The positioning component 408 is used to determine the current geographic location of the terminal 400 for navigation or LBS (Location Based Service). The positioning component 408 may be based on the United States' GPS (Global Positioning System), China's BeiDou system, or the European Union's Galileo system.
The power supply 409 is used to supply power to the various components in the terminal 400. The power supply 409 may be an alternating current source, a direct current source, a disposable battery, or a rechargeable battery. When the power supply 409 includes a rechargeable battery, the battery may be a wired rechargeable battery, charged through a wired connection, or a wireless rechargeable battery, charged through a wireless coil. The rechargeable battery may also support fast-charging technology.
In some embodiments, the terminal 400 also includes one or more sensors 410. The one or more sensors 410 include, but are not limited to: acceleration sensor 411, gyro sensor 412, pressure sensor 413, fingerprint sensor 414, optical sensor 415, and proximity sensor 416.
The acceleration sensor 411 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 400. For example, the acceleration sensor 411 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 401 may control the display screen 405 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 411. The acceleration sensor 411 may also be used for acquisition of motion data of a game or a user.
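As an illustration of the orientation decision just described, the following minimal Python sketch maps gravity components to a landscape or portrait view. The axis convention, the function name choose_view, and the simple threshold rule are assumptions introduced here for explanation; they are not taken from the present disclosure.

```python
# Illustrative only: a simple orientation decision from gravity components.
# The threshold rule and names are assumptions, not part of the embodiment.

def choose_view(gx: float, gy: float) -> str:
    """Return 'landscape' or 'portrait' from gravity components (m/s^2)
    along the terminal's x (short edge) and y (long edge) axes."""
    # When gravity falls mostly along the short edge, the long edge is
    # horizontal, so a landscape layout is appropriate; otherwise portrait.
    return "landscape" if abs(gx) > abs(gy) else "portrait"

if __name__ == "__main__":
    print(choose_view(gx=9.6, gy=0.4))   # -> landscape
    print(choose_view(gx=0.3, gy=9.7))   # -> portrait
```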
The gyro sensor 412 may detect the body orientation and rotation angle of the terminal 400, and may cooperate with the acceleration sensor 411 to capture the user's 3D motion of the terminal 400. Based on the data collected by the gyro sensor 412, the processor 401 may implement functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 413 may be disposed on a side frame of the terminal 400 and/or beneath the display screen 405. When the pressure sensor 413 is disposed on a side frame of the terminal 400, it can detect the user's grip signal on the terminal 400, and the processor 401 may perform left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 413. When the pressure sensor 413 is disposed beneath the display screen 405, the processor 401 controls operable controls on the UI according to the user's pressure operation on the display screen 405. The operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 414 is used to collect the user's fingerprint, and the processor 401 identifies the user according to the fingerprint collected by the fingerprint sensor 414, or the fingerprint sensor 414 identifies the user according to the collected fingerprint. When the user's identity is recognized as trusted, the processor 401 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 414 may be disposed on the front, back, or side of the terminal 400. When a physical button or a manufacturer's logo is provided on the terminal 400, the fingerprint sensor 414 may be integrated with the physical button or the manufacturer's logo.
The optical sensor 415 is used to collect the ambient light intensity. In one embodiment, processor 401 may control the display brightness of display screen 405 based on the ambient light intensity collected by optical sensor 415. Specifically, when the ambient light intensity is high, the display brightness of the display screen 405 is increased; when the ambient light intensity is low, the display brightness of the display screen 405 is reduced. In another embodiment, the processor 401 may also dynamically adjust the shooting parameters of the camera assembly 406 according to the ambient light intensity collected by the optical sensor 415.
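The brightness adjustment described above can be pictured with a small Python sketch. The lux range and the linear clamp-and-scale mapping below are assumptions made only for illustration; the embodiment merely states that brightness rises and falls with ambient light intensity.

```python
# Illustrative only: map an ambient light reading to a display brightness level.
# The lux range and linear mapping are assumptions, not part of the embodiment.

def brightness_from_lux(lux: float, min_b: float = 0.1, max_b: float = 1.0,
                        max_lux: float = 1000.0) -> float:
    """Map an ambient light reading (lux) to a brightness level in [min_b, max_b]."""
    ratio = min(max(lux / max_lux, 0.0), 1.0)   # clamp to [0, 1]
    return min_b + ratio * (max_b - min_b)

if __name__ == "__main__":
    print(brightness_from_lux(50))     # dim room -> low brightness
    print(brightness_from_lux(1200))   # bright daylight -> full brightness
```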
The proximity sensor 416, also known as a distance sensor, is typically disposed on the front panel of the terminal 400. The proximity sensor 416 is used to measure the distance between the user and the front of the terminal 400. In one embodiment, when the proximity sensor 416 detects that the distance between the user and the front of the terminal 400 is gradually decreasing, the processor 401 controls the display screen 405 to switch from the screen-on state to the screen-off state; when the proximity sensor 416 detects that the distance between the user and the front of the terminal 400 is gradually increasing, the processor 401 controls the display screen 405 to switch from the screen-off state back to the screen-on state.
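A minimal Python sketch of this proximity-driven screen switching follows. The distance thresholds and the hysteresis band are assumptions introduced here for illustration; the embodiment only specifies that the screen darkens as the user approaches and brightens as the user moves away.

```python
# Illustrative only: screen state update driven by successive distance samples.
# The thresholds and hysteresis values are assumptions for explanation.

def next_screen_state(current: str, distance_cm: float,
                      near_cm: float = 3.0, far_cm: float = 5.0) -> str:
    """Return 'off' when the user is close (e.g. phone held to the ear),
    'on' when the user moves away; the hysteresis band avoids flicker."""
    if distance_cm <= near_cm:
        return "off"
    if distance_cm >= far_cm:
        return "on"
    return current  # within the hysteresis band, keep the current state

if __name__ == "__main__":
    state = "on"
    for d in (10.0, 2.5, 4.0, 6.0):
        state = next_screen_state(state, d)
        print(d, "->", state)
```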
That is, an embodiment of the present application provides a terminal that includes a processor and a memory for storing instructions executable by the processor, where the processor is configured to perform the image data acquisition method shown in fig. 3. An embodiment of the present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the image data acquisition method shown in fig. 3.
An embodiment of the present application further provides a computer program product containing instructions which, when run on a computer, cause the computer to execute the image data acquisition method provided in the embodiment shown in fig. 3.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (9)
1. An image data acquisition method, applied to a network analysis module of a terminal on which a first client and a second client are installed, the method comprising:
acquiring a plurality of data packets transmitted between the first client and a server, wherein the first client is a client of a medical image information system and the server is a server of the medical image information system;
screening the plurality of data packets to obtain one or more target data packets, wherein the one or more target data packets are data packets sent when the first client requests data from the server;
determining one or more image data identifiers according to the one or more target data packets; and
sending the one or more image data identifiers to the second client, so that the second client obtains corresponding image data according to the one or more image data identifiers, wherein the second client is a client of an artificial intelligence auxiliary diagnosis system.
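A minimal Python sketch of the flow recited in claim 1, for readability only. The CapturedPacket structure, the request flag, the print-based hand-off to the second client, and the WADO-style example payload are assumptions introduced here for illustration; none of these names or details comes from the patent itself.

```python
# Hypothetical sketch of claim 1: acquire packets exchanged between the first
# client and the server, screen out the request packets, derive image data
# identifiers from them, and hand the identifiers to the second client.

from dataclasses import dataclass
from typing import Iterable, List, Optional


@dataclass
class CapturedPacket:
    is_request: bool        # "first type": sent when the first client requests data
    app_layer: str          # application-layer payload as text


def screen_target_packets(packets: Iterable[CapturedPacket]) -> List[CapturedPacket]:
    # Screening step: keep only packets the first client sent to request data.
    return [p for p in packets if p.is_request]


def determine_identifier(packet: CapturedPacket) -> Optional[str]:
    # Placeholder for the per-packet extraction detailed in claims 3 and 4
    # (see the later sketches); here the payload is simply passed through.
    return packet.app_layer or None


def acquire_image_identifiers(packets: Iterable[CapturedPacket]) -> List[str]:
    targets = screen_target_packets(packets)
    ids = [determine_identifier(p) for p in targets]
    return [i for i in ids if i]


def send_to_second_client(identifiers: List[str]) -> None:
    # On a real terminal this might be an inter-process call to the
    # AI-assisted diagnosis client; printing stands in for that here.
    for identifier in identifiers:
        print("notify second client:", identifier)


if __name__ == "__main__":
    packets = [
        CapturedPacket(True, "GET /wado?studyUID=1.2.3 HTTP/1.1"),
        CapturedPacket(False, "HTTP/1.1 200 OK"),
    ]
    send_to_second_client(acquire_image_identifiers(packets))
```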
2. The method according to claim 1, wherein screening the plurality of data packets to obtain the one or more target data packets comprises:
acquiring a data packet type of each data packet from data packet information of each data packet; and
taking one or more data packets whose data packet type is a first type as the one or more target data packets, wherein the first type is the type of data packet sent when the first client requests data from the server.
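A sketch of the screening step in claim 2, under assumptions made only for illustration: the packet information is modeled as a plain dict, and the "first type" is assumed to correspond to an HTTP request issued by the first client. The patent does not fix either detail.

```python
# Hypothetical concretization of claim 2: read the packet type from the packet
# information and keep only packets of the "first type" (request packets).

from typing import Dict, List

FIRST_TYPE = "request"   # assumed label for packets sent when requesting data


def packet_type(packet_info: Dict[str, str]) -> str:
    payload = packet_info.get("app_layer", "")
    methods = ("GET ", "POST ", "HEAD ")
    return FIRST_TYPE if payload.startswith(methods) else "other"


def screen(packets: List[Dict[str, str]]) -> List[Dict[str, str]]:
    return [p for p in packets if packet_type(p) == FIRST_TYPE]


if __name__ == "__main__":
    pkts = [
        {"app_layer": "GET /images/123 HTTP/1.1\r\nHost: pacs.example"},
        {"app_layer": "HTTP/1.1 200 OK\r\nContent-Type: image/jpeg"},
    ]
    print(len(screen(pkts)))  # -> 1 (only the request packet is kept)
```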
3. The method according to claim 1, wherein determining the one or more image data identifiers according to the one or more target data packets comprises:
extracting application layer information of each target data packet from each target data packet;
segmenting the application layer information of each target data packet to obtain a plurality of data slices of each target data packet;
acquiring, from the plurality of data slices of each target data packet, data slices whose slice type is a second type or a third type, wherein the second type is a type marked by a field containing a storage path of the image data, and the third type is a type marked by a field containing server information; and
determining the one or more image data identifiers according to the acquired data slices.
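A sketch of the slicing and selection steps in claim 3. Treating HTTP header lines as the "data slices", the request line as the path-bearing (second type) slice, and the Host header as the server-information (third type) slice is an assumption made here for illustration only; the claim itself does not name a protocol.

```python
# Hypothetical sketch of claim 3: split the application-layer information of a
# target packet into slices and keep the slices that carry the storage path
# (second type) or the server information (third type).

from typing import Dict, List


def slice_app_layer(app_layer: str) -> List[str]:
    # Each non-empty line of the application-layer text is one data slice.
    return [line for line in app_layer.split("\r\n") if line]


def classify_slice(data_slice: str) -> str:
    if data_slice.startswith(("GET ", "POST ")):
        return "second"          # contains the storage path of the image data
    if data_slice.lower().startswith("host:"):
        return "third"           # contains server information
    return "other"


def pick_typed_slices(app_layer: str) -> Dict[str, str]:
    return {classify_slice(s): s for s in slice_app_layer(app_layer)
            if classify_slice(s) in ("second", "third")}


if __name__ == "__main__":
    payload = "GET /wado?objectUID=1.2.840.1 HTTP/1.1\r\nHost: 10.0.0.8:8080\r\n"
    print(pick_typed_slices(payload))
```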
4. The method according to claim 3, wherein determining the one or more image data identifiers according to the acquired data slices comprises:
extracting a first address field from the data slice whose slice type is the second type among the plurality of data slices of each target data packet, wherein the first address field indicates a storage path of the image data corresponding to the corresponding data packet;
extracting a second address field from the data slice whose slice type is the third type among the plurality of data slices of each target data packet, wherein the second address field is address information of the server; and
generating an image data identifier corresponding to each target data packet according to the first address field and the second address field extracted from that target data packet.
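A sketch of the field extraction and identifier generation in claim 4, continuing the assumptions of the previous sketch. Building a plain HTTP URL from the two fields is one possible concretization; the claim only requires that the first and second address fields be combined into an image data identifier.

```python
# Hypothetical sketch of claim 4: pull the storage path (first address field)
# and the server address (second address field) out of their slices, then
# join them into an image data identifier.

def first_address_field(path_slice: str) -> str:
    # e.g. "GET /wado?objectUID=1.2.840.1 HTTP/1.1" -> "/wado?objectUID=1.2.840.1"
    parts = path_slice.split(" ")
    return parts[1] if len(parts) >= 2 else ""


def second_address_field(host_slice: str) -> str:
    # e.g. "Host: 10.0.0.8:8080" -> "10.0.0.8:8080"
    return host_slice.split(":", 1)[1].strip()


def build_identifier(path_slice: str, host_slice: str) -> str:
    return "http://" + second_address_field(host_slice) + first_address_field(path_slice)


if __name__ == "__main__":
    print(build_identifier("GET /wado?objectUID=1.2.840.1 HTTP/1.1",
                           "Host: 10.0.0.8:8080"))
    # -> http://10.0.0.8:8080/wado?objectUID=1.2.840.1
```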
5. An image data acquisition apparatus, applied to a network analysis module of a terminal on which a first client and a second client are installed, the apparatus comprising:
an acquisition module, configured to acquire a plurality of data packets transmitted between the first client and a server, wherein the first client is a client of a medical image information system and the server is a server of the medical image information system;
an extraction module, configured to extract one or more image data identifiers from the plurality of data packets, wherein the extraction module comprises:
a screening unit, configured to screen the plurality of data packets to obtain one or more target data packets, the one or more target data packets being data packets sent when the first client requests data from the server; and
a determining unit, configured to determine the one or more image data identifiers according to the one or more target data packets; and
a sending module, configured to send the one or more image data identifiers to the second client, so that the second client obtains corresponding image data according to the one or more image data identifiers, wherein the second client is a client of an artificial intelligence auxiliary diagnosis system.
6. The apparatus according to claim 5, wherein the screening unit is specifically configured to:
acquire a data packet type of each data packet from data packet information of each data packet; and
take one or more data packets whose data packet type is a first type as the one or more target data packets, wherein the first type is the type of data packet sent when the first client requests data from the server.
7. The apparatus according to claim 5, wherein the determining unit comprises:
an extraction subunit, configured to extract application layer information of each target data packet from each target data packet;
a segmentation subunit, configured to segment the application layer information of each target data packet to obtain a plurality of data slices of each target data packet;
an acquisition subunit, configured to acquire, from the plurality of data slices of each target data packet, data slices whose slice type is a second type or a third type, wherein the second type is a type marked by a field containing a storage path of the image data, and the third type is a type marked by a field containing server information; and
a determining subunit, configured to determine the one or more image data identifiers according to the acquired data slices.
8. The apparatus according to claim 7, wherein the determining subunit is specifically configured to:
extract a first address field from the data slice whose slice type is the second type among the plurality of data slices of each target data packet, wherein the first address field indicates a storage path of the image data corresponding to the corresponding data packet;
extract a second address field from the data slice whose slice type is the third type among the plurality of data slices of each target data packet, wherein the second address field is address information of the server; and
generate an image data identifier corresponding to each target data packet according to the first address field and the second address field extracted from that target data packet.
9. A computer-readable storage medium, wherein a computer program is stored in the storage medium, and the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 4.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910974450.3A | 2019-10-14 | 2019-10-14 | Image data acquisition method, device and storage medium |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN110650210A (en) | 2020-01-03 |
| CN110650210B (en) | 2022-06-17 |
Family

ID=69012843

Family Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910974450.3A (Active) | 2019-10-14 | 2019-10-14 | Image data acquisition method, device and storage medium |

Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN110650210B (en) |
Families Citing this family (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111599482A (en) * | 2020-05-14 | 2020-08-28 | 青岛海信医疗设备股份有限公司 | Electronic case recommendation method and server |
| CN111883233A (en) * | 2020-07-14 | 2020-11-03 | 上海商汤智能科技有限公司 | Image acquisition method and device, electronic equipment and storage medium |
Citations (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105681196A (en) * | 2016-01-12 | 2016-06-15 | 中国联合网络通信集团有限公司 | Service processing method, forwarder and classifier |
| CN106845076A (en) * | 2016-12-20 | 2017-06-13 | 杭州联众医疗科技股份有限公司 | A kind of remote image diagnostic system |
| CN108447549A (en) * | 2018-03-16 | 2018-08-24 | 沈阳东软医疗系统有限公司 | A kind of method and device of cooperation reading image |
| CN109686424A (en) * | 2018-12-27 | 2019-04-26 | 管伟 | A kind of storage and exchange intelligent medical treatment system of medical image information |
Family Cites Families (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7047235B2 (en) * | 2002-11-29 | 2006-05-16 | Agency For Science, Technology And Research | Method and apparatus for creating medical teaching files from image archives |
Also Published As

| Publication number | Publication date |
|---|---|
| CN110650210A (en) | 2020-01-03 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |