CN112330707A - Image processing method, image processing device, computer equipment and storage medium - Google Patents


Info

Publication number
CN112330707A
CN112330707A (application CN202011288760.9A)
Authority
CN
China
Prior art keywords
image data
image
data
server
film reading
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011288760.9A
Other languages
Chinese (zh)
Inventor
卢承磊
王余超
何思敏
庄之中
王永
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan United Imaging Healthcare Co Ltd
Original Assignee
Wuhan United Imaging Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan United Imaging Healthcare Co Ltd filed Critical Wuhan United Imaging Healthcare Co Ltd
Priority to CN202011288760.9A
Publication of CN112330707A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/174Segmentation; Edge detection involving the use of two or more images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods

Abstract

The application relates to an image processing method, an image processing apparatus, a computer device and a storage medium. The method comprises: receiving film reading response data sent by a server, where the film reading response data is obtained by the server performing calculation on an image based on a film reading operation instruction of a user, and comprises image data and non-image data; merging the image data and the non-image data to obtain an updated image; and displaying the updated image on a film reading interface. With this method, the number of film reading interface refreshes can be reduced, the fluency of film reading diagnosis improved, and the risk of misdiagnosis reduced.

Description

Image processing method, image processing device, computer equipment and storage medium
Technical Field
The present application relates to the field of medical technology, and in particular, to an image processing method, an image processing apparatus, a computer device, and a storage medium.
Background
With the rise of internet and cloud computing, medical informatization is continuously upgraded, and more hospitals begin to use film reading clients to perform three-dimensional diagnosis film reading.
In the film reading client, a doctor can perform operations such as rapid page turning, translation and rotation on an image. The client interacts with the server according to the film reading instruction generated by such an operation, receives the calculation result obtained by the server from the image based on that instruction, and refreshes the film reading interface in time according to the result, so that the doctor can perform film reading diagnosis through the client.
However, the calculation result contains different types of data, and the client receives the different types at different times. The client therefore refreshes the film reading interface once per arrival, which causes the interface to jitter and affects the accuracy of the doctor's diagnosis.
Disclosure of Invention
In view of the above, it is necessary to provide an image processing method, an apparatus, a computer device and a storage medium for solving the above technical problems.
An image processing method, the method comprising:
receiving film reading response data sent by a server; the film reading response data is obtained by the server performing calculation on an image based on a film reading operation instruction of a user; the film reading response data comprises image data and non-image data;
merging the image data and the non-image data to obtain an updated image;
and displaying the updated image on the film reading interface.
In one embodiment, the non-image data includes coordinate information; merging the image data and the non-image data to obtain an updated image, comprising:
drawing an image to be displayed based on the image data;
determining the display position of the non-image data in the image to be displayed according to the coordinate information of the non-image data;
non-image data is added at the display position.
In one embodiment, the method further includes:
and if the film reading response data contains the image data but does not contain the non-image data, drawing the image to be displayed according to the image data, and returning to the step of receiving the film reading response data sent by the server, until the film reading response data contains the non-image data.
In one embodiment, the reading response data carries a data type tag, and the method further includes:
determining data whose data type tag matches the image data to be image data, and data whose data type tag does not match to be non-image data.
In one embodiment, before receiving the interpretation response data sent by the server, the method further includes:
acquiring a film reading operation instruction of a user on an image displayed on a film reading interface; the reading operation instruction is at least one of page turning operation, translation operation, rotation operation and marking operation;
determining an operation label corresponding to the film reading operation instruction according to a preset instruction label corresponding relation; the instruction label corresponding relation comprises operation labels corresponding to the film reading operation instructions of various types;
sending the film reading operation instruction and the operation label corresponding to the film reading operation instruction to a server; the operation label is used for indicating the server to mark the reading response data corresponding to the reading operation instruction.
In one embodiment, receiving the film reading response data sent by the server includes:
receiving interactive data sent by a server;
and screening the interactive data according to the operation label, and determining the data carrying the operation label to be the film reading response data.
In one embodiment, the non-image data includes at least one of annotation information, device parameter information, and diagnosis information corresponding to the image.
An image processing apparatus, the apparatus comprising:
the receiving module is used for receiving the film reading response data sent by the server; the film reading response data is obtained by the server performing calculation on an image based on a film reading operation instruction of the user; the film reading response data comprises image data and non-image data;
the merging module is used for merging the image data and the non-image data to obtain an updated image;
and the display module is used for displaying the updated image on the film reading interface.
A computer device comprising a memory storing a computer program and a processor which, when executing the computer program, implements the steps of the above image processing method.
A computer-readable storage medium on which a computer program is stored which, when executed by a processor, implements the steps of the above image processing method.
According to the image processing method, the image processing apparatus, the computer device and the storage medium, after a user triggers a film reading operation, the computer device sends the user's film reading operation instruction to the server, so that the server calculates the image based on that instruction, obtains the film reading response data and sends it to the computer device. After receiving the film reading response data, the computer device merges the image data and the non-image data into an updated image once both are present, and displays the updated image on the film reading interface. Because the merge is performed only after the film reading response data contains both image data and non-image data, the computer device can tell whether the received response data contains all the different types of data calculated by the server, and thus whether reception is complete. Further, since the updated image combines the image data and the non-image data, it contains the complete information of the film reading response data, and the film reading interface can be refreshed once based on it. This avoids the interface jitter caused by refreshing the film reading interface multiple times as the non-image data and the image data are detected separately, improves the user's film reading fluency, and reduces the risk of misdiagnosis.
Drawings
FIG. 1 is a diagram of an exemplary embodiment of an image processing method;
FIG. 2 is a flow diagram illustrating a method for image processing according to one embodiment;
FIG. 3 is a flow chart illustrating an image processing method according to another embodiment;
FIG. 4 is a flowchart illustrating an image processing method according to another embodiment;
FIG. 5 is a block diagram showing the configuration of an image processing apparatus according to an embodiment;
FIG. 6 is a block diagram showing the construction of an image processing apparatus according to another embodiment;
FIG. 7 is a block diagram showing the construction of an image processing apparatus according to another embodiment;
FIG. 8 is a block diagram showing the construction of an image processing apparatus according to another embodiment;
FIG. 9 is a block diagram showing the construction of an image processing apparatus according to another embodiment;
FIG. 10 is a diagram showing an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The image processing method provided by the application can be applied to the environment shown in FIG. 1. The computer device 100 may be connected to the server 200 to obtain and display images from the server 200; the display of the computer device may be one or a combination of a cathode ray tube (CRT) display, a liquid crystal display (LCD), an organic light-emitting diode (OLED) display and a plasma display. The server 200 may be connected to different types of medical devices 300, which are not limited to the magnetic resonance device shown in FIG. 1. The server 200 may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, there is provided an image processing method, which is described by taking the method as an example applied to the computer device in fig. 1, and includes:
s101, receiving reading response data sent by a server; the film reading response data is obtained by the server through calculation of the image based on the film reading operation instruction of the user; the radiograph interpretation response data includes image data and non-image data.
The user can perform three-dimensional or two-dimensional diagnostic film reading of an image through the computer device. When performing diagnostic film reading, the user operates on the image through the input device of the computer device, generating a film reading operation instruction. The film reading operation instruction can be at least one of a page turning operation, a translation operation, a rotation operation and a marking operation; the marking operation may include drawing a line on the image, adding a text mark, modifying a text mark, and the like. After the computer device acquires the film reading operation instruction, it can send the instruction to the server over the network connecting them, so that the server performs the corresponding calculation on the image browsed by the user and obtains the film reading response data corresponding to the instruction. The connecting network may be a local area network (LAN), a wide area network (WAN), a public network, a private network, the public switched telephone network (PSTN), the internet, a wireless network, a virtual network, or any combination thereof.
The film reading response data is obtained by the server calculating the image based on the user's film reading operation instruction. It may include image data obtained by translating or rotating the image, and other information related to the image, i.e. non-image data, such as the height, age, scanned part and scan calculation result of the subject corresponding to the image. Optionally, the non-image data may include at least one of annotation information, device parameter information, and diagnosis information corresponding to the image.
The server can send the data obtained based on the film reading operation instruction to the computer device. Because this data contains both image data and non-image data, and the response times of different data types in the server and their transmission times in the network may differ, the computer device may not receive the image data and the non-image data at the same time. For example, the server obtains the corresponding image data and non-image data based on the film reading operation instruction and sends both to the computer device; the computer device may receive either the image data or the non-image data first.
Therefore, the computer device can determine whether the received data contains both image data and non-image data. If it does, the computer device can consider the received film reading response data complete and update the film reading interface. If the received data contains only image data or only non-image data, the computer device can consider the currently received data incomplete: updating the display now would cause the film reading interface to be refreshed multiple times and reduce the user's film reading fluency.
Specifically, the computer device may compare the data amount of the film reading response data obtained by the server with the data amount it has received, to determine whether the received data includes both image data and non-image data; alternatively, the computer device may make this determination according to a data type tag carried in the received data. The determination method is not limited herein.
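The completeness check described above can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the chunk structure and the `data_type` field name are assumptions.

```python
# Hypothetical sketch of the completeness check: the client buffers incoming
# response chunks and treats the film reading response data as complete only
# once both an image payload and a non-image payload have arrived.

def is_response_complete(received_chunks):
    """True once the buffered chunks contain both data types."""
    types = {chunk["data_type"] for chunk in received_chunks}
    return {"image", "non_image"} <= types

buffer = []
buffer.append({"data_type": "image", "payload": b"pixels"})
first = is_response_complete(buffer)    # image alone: keep waiting
buffer.append({"data_type": "non_image", "payload": {"age": 42}})
second = is_response_complete(buffer)   # both present: safe to refresh once
```

Only when the check passes does the client proceed to the merge and the single interface refresh of S102-S103.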
And S102, merging the image data and the non-image data to obtain an updated image.
If the film reading response data received by the computer device contains both image data and non-image data, the computer device can merge them, so that the merged updated image can be displayed in a single refresh.
Specifically, if the non-image data can be displayed at any position in the updated image, the computer device may bind the non-image data to any data in the image data, so that the user can click any position in the updated image to obtain the non-image data. Alternatively, if the display position of the non-image data corresponds to some of the pixels in the image, the computer device may bind the non-image data to the data of those pixels. The merging method is not limited herein.
In addition, in the updated image, the image data and the non-image data may be located in the same layer or in different layers.
In another embodiment, the server may process the image based on the operation instruction, itself merge the image data and the non-image data, and send the merged result to the computer device, so that the computer device receives the image data and the non-image data at the same time, reducing the number of refreshes of the film reading interface and avoiding the frequent interface refreshing caused by multiple separate receptions.
And S103, displaying the updated image on the film reading interface.
On the basis of the above steps, the computer device can display the updated image on the film reading interface. Specifically, the computer device may display the image data and the non-image data at the same time; or it may display the image data by default and display the non-image data only after the user triggers it, for example by selecting with the mouse the user information in the non-image data, whereupon that information is shown at the corresponding position of the film reading interface. When displaying the non-image data, the computer device may show it directly on the image of the film reading interface or in a pop-up window, which is not limited herein.
The user may perform multiple film reading operations on the image in sequence, for example a translation followed by a page turn. The computer device sequentially transmits the corresponding instructions to the server. After receiving each instruction, the server generates the image data and non-image data for that film reading operation and sends them to the computer device in turn. After refreshing the film reading interface for one film reading operation instruction, the computer device may return to S101-S103, continue to process the film reading response data of the other operations, and display the corresponding updated images in sequence.
According to the image processing method, after a user triggers a film reading operation, the computer device sends the user's film reading operation instruction to the server, so that the server calculates the image based on that instruction, obtains the film reading response data and sends it to the computer device. After receiving the film reading response data, the computer device merges the image data and the non-image data into an updated image once both are present, and displays the updated image on the film reading interface. Because the merge is performed only after the film reading response data contains both image data and non-image data, the computer device can tell whether the received response data contains all the different types of data calculated by the server, and thus whether reception is complete. Further, since the updated image combines the image data and the non-image data, it contains the complete information of the film reading response data, and the film reading interface can be refreshed once based on it. This avoids the interface jitter caused by refreshing the film reading interface multiple times as the non-image data and the image data are detected separately, improves the user's film reading fluency, and reduces the risk of misdiagnosis.
Fig. 3 is a schematic flowchart of an image processing method in another embodiment, where this embodiment relates to an implementation manner of a computer device performing merging processing on image data and non-image data, and on the basis of the foregoing embodiment, as shown in fig. 3, the foregoing S102 includes:
s201, drawing an image to be displayed based on the image data.
The image to be displayed is the image drawn from the image data, for display on the film reading interface. The computer device may draw the image data after determining that both the image data and the non-image data have been received, obtaining the image to be displayed; or it may start drawing as soon as the image data is received, cache the resulting image to be displayed, and read it from the cache once the non-image data has also been obtained. The acquisition method is not limited herein.
S202, determining the display position of the non-image data in the image to be displayed according to the coordinate information of the non-image data.
The coordinate information of the non-image data can be the coordinate determined by the server and carried in the non-image data, or the coordinate information acquired by the computer equipment; the manner of acquiring the coordinate information corresponding to different types of non-image data may also be different.
In one implementation, the non-image data may carry its corresponding coordinate information. The coordinate information may be the pixel coordinates, in the image data, of the pixels associated with the non-image data, or the display coordinates of the non-image data on the film reading interface, which is not limited herein. The computer device may then determine the display position of the non-image data in the image to be displayed based on this coordinate information.
In one implementation, the coordinate information of the non-image data may be a predetermined position, such as one of the four corners of the image to be displayed; or it may be coordinates associated with an object in the image to be displayed. For example, the coordinate information of the non-image data may be a position of non-important information in the image to be displayed, i.e. a coordinate position that does not cover a blood vessel, bone or other important tissue. The computer device may use an image recognition algorithm to obtain the regions of non-important information in the image to be displayed, and then determine the coordinate information of the non-image data from those regions. The image recognition algorithm may be a threshold algorithm, a region growing algorithm, a machine learning algorithm, and the like, which are not described again herein.
S203, adding the non-image data to the display position.
On the basis of the above steps, the computer device adds the non-image data at the corresponding display position, obtaining the updated image. For example, if the non-image data is annotation information corresponding to the scanned part, and its display position in the image to be displayed is that scanned part, the computer device adds the non-image data to the rotated or translated scanned part to obtain the updated image.
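Steps S201-S203 can be sketched as follows: draw the frame from the image data, then pin each non-image item at the display position given by its coordinate information. The data structures and field names here are illustrative assumptions, not the patent's implementation.

```python
# Minimal sketch of S201-S203: the drawn frame plus non-image overlays,
# each placed at the position carried in its coordinate information.

def merge_to_updated_image(image_data, non_image_items):
    """S201: keep the drawn frame; S202/S203: place each non-image item."""
    updated = {"frame": image_data, "overlays": []}
    for item in non_image_items:
        x, y = item["coord"]  # S202: display position from coordinate info
        updated["overlays"].append({"pos": (x, y), "text": item["text"]})  # S203
    return updated

annotations = [{"coord": (10, 20), "text": "scanned part"}]
updated_image = merge_to_updated_image(b"pixels", annotations)
```

The updated image is then displayed in a single refresh, regardless of how many overlay items arrived.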
According to the image processing method, the computer equipment merges the non-image data and the image data through the coordinate information carried in the non-image data, so that the non-image data can still be displayed at the position corresponding to the film reading operation instruction in the updated image after the user performs the film reading operation on the image, and the user can conveniently perform diagnosis and film reading; furthermore, after the computer device merges the image data and the non-image data, the display of the film reading interface is refreshed once, so that the display results corresponding to the film reading operation instructions can be displayed simultaneously, and the computer device is prevented from refreshing the display for multiple times.
In an embodiment, on the basis of the above embodiment, after the computer device determines whether the film reading response data contains image data and non-image data: if it contains image data but not non-image data, the computer device draws the image to be displayed from the image data and returns to the step of receiving the film reading response data sent by the server, until the response data contains non-image data. If it contains non-image data but not image data, the computer device continues to receive the film reading response data sent by the server until the image data is received, and then merges the image data and the non-image data.
If the computer device has received the image data but not yet the non-image data, it draws the image from the image data first, so that once the non-image data arrives it can be merged directly with the drawn image. This shortens the merging time, reduces the data display delay in film reading, and further improves film reading fluency.
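The early-drawing optimization above can be sketched as a small event handler: when image data arrives first, the frame is drawn and cached immediately, so that when the non-image data arrives only the cheap merge step remains. All names and the `data_type` field are illustrative assumptions.

```python
# Hypothetical sketch: draw-and-cache on first arrival, merge on completion.

cache = {}

def draw(payload):
    return f"rendered:{payload}"          # stand-in for actual rendering

def on_receive(chunk):
    """Returns the updated image once both parts are present, else None."""
    if chunk["data_type"] == "image":
        cache["frame"] = draw(chunk["payload"])   # draw and cache at once
    else:
        cache["overlay"] = chunk["payload"]
    if "frame" in cache and "overlay" in cache:
        return {"frame": cache["frame"], "overlays": [cache["overlay"]]}
    return None                                    # incomplete: keep waiting

r1 = on_receive({"data_type": "image", "payload": "slice7"})
r2 = on_receive({"data_type": "non_image", "payload": "age: 42"})
```

Because drawing overlaps with waiting for the non-image data, the interface is still refreshed only once, but sooner.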
In another embodiment, to let the computer device determine whether the film reading response data contains image data and non-image data, the film reading response data sent by the server may carry a data type tag, and the computer device makes the determination according to that tag: data whose data type tag matches the image data is determined to be image data, and data whose tag does not match is determined to be non-image data. For example, the computer device may store a correspondence between data type tags and data types, e.g. tag 1 corresponds to image data and tag 2 to non-image data. If the data type tag carried by first data in the film reading response data matches the image data, the first data is determined to be image data; if the data type tag carried by second data matches the non-image data, the second data is determined to be non-image data.
After tag matching, if the data type tags contained in the film reading response data include both the tag corresponding to image data and the tag corresponding to non-image data, the computer device may consider that the film reading response data contains both image data and non-image data, and may merge them.
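The tag-matching rule can be sketched as follows, mirroring the example in the text (tag 1 for image data, tag 2 for non-image data). The correspondence table and field names are assumptions for illustration.

```python
# Sketch of the data type tag correspondence: a chunk whose tag matches the
# image type is image data; any other tag is treated as non-image data.

TAG_TO_TYPE = {1: "image", 2: "non_image"}   # assumed correspondence table

def classify(chunk):
    return TAG_TO_TYPE.get(chunk["type_tag"], "non_image")

def contains_both(chunks):
    """True when the tags cover both the image and non-image types."""
    return {classify(c) for c in chunks} >= {"image", "non_image"}

kinds = [classify({"type_tag": 1}), classify({"type_tag": 2})]
```

Once `contains_both` holds, the merge and single refresh of S102-S103 can proceed.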
According to this image processing method, the computer device identifies the image data and the non-image data by tag matching, and can quickly and accurately determine whether the film reading response data contains both, so that they can be merged and displayed promptly, improving the fluency of diagnostic film reading.
Fig. 4 is a schematic flowchart of an image processing method in another embodiment. This embodiment relates to how the computer device acquires and labels the film reading operation instruction. On the basis of the above embodiment, as shown in fig. 4, before the above S101, the method further includes:
s301, acquiring a reading operation instruction of a user on an image displayed on a reading interface; the reading operation instruction is at least one of page turning operation, translation operation, rotation operation and marking operation.
The computer device can determine whether the user has performed a film reading operation by monitoring trigger operations on the input device, which may be a mouse, a keyboard, a touch screen, etc. The film reading operation instruction is at least one of a page turning operation, a translation operation, a rotation operation and a marking operation. After detecting that the user has performed a film reading operation, the computer device can generate the corresponding film reading operation instruction.
S302, determining an operation label corresponding to the film reading operation instruction according to a preset instruction label corresponding relation; the instruction label corresponding relation comprises operation labels corresponding to the reading operation instructions of various types.
The computer device can label the film reading operation instruction according to the type of the film reading operation, obtaining the operation label corresponding to the instruction. The operation label may be a text label, an icon label, or a numeric label, which is not limited herein. For example, operation label A identifies a translation operation and operation label B identifies a rotation operation.
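The preset instruction-label correspondence of S302 can be sketched as a lookup table. Labels A and B follow the example in the text; the remaining entries and all names are illustrative assumptions.

```python
# Sketch of the instruction-label correspondence: one operation label per
# type of film reading operation instruction.

INSTRUCTION_LABELS = {
    "translate": "A",   # operation label A identifies a translation
    "rotate": "B",      # operation label B identifies a rotation
    "page_turn": "C",   # assumed
    "mark": "D",        # assumed
}

def label_for(instruction_type):
    """S302: look up the operation label for a film reading instruction."""
    return INSTRUCTION_LABELS[instruction_type]
```

The instruction and its label are then sent together to the server (S303), which uses the label to mark the corresponding response data.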
S303, sending the film reading operation instruction and the operation label corresponding to the film reading operation instruction to a server; the operation label is used for indicating the server to mark the reading response data corresponding to the reading operation instruction.
Further, the computer device can send the film reading operation instruction and its corresponding operation label to the server. After receiving them, the server calculates the image on the current interface according to the instruction to obtain the film reading response data, and marks that response data with the operation label. For example, the image data and non-image data corresponding to the translation operation are marked with label A.
The interactive data sent by the server may include, in addition to the film reading response data corresponding to the film reading operation instruction, other data such as data left over from a previous operation or running state data of the server. The computer device can receive the interactive data sent by the server, filter it according to the operation label, and determine the data carrying the operation label as the film reading response data.
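The client-side filtering described above can be sketched as follows; the message shape, field names, and label value are hypothetical, since the patent does not fix a wire format:

```python
# Hypothetical shape of interactive data sent by the server: a list of
# messages, of which only those carrying the expected operation label are
# film reading response data. Leftover or server-status messages are dropped.
def filter_response(interactive_data: list, operation_label: str) -> list:
    return [item for item in interactive_data
            if item.get("operation_label") == operation_label]

messages = [
    {"operation_label": "A", "kind": "image", "payload": b"..."},
    {"operation_label": None, "kind": "server_status", "payload": "idle"},
    {"operation_label": "A", "kind": "annotation", "text": "lesion", "x": 10, "y": 20},
]
responses = filter_response(messages, "A")  # keeps the two label-"A" items
```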
With this image processing method, the computer device filters the data sent by the server by operation label, so that it obtains exactly the film reading response data corresponding to the user's film reading operation, interference from other data is excluded, and the updated image displayed in the film reading interface is more accurate.
It should be understood that although the steps in the flow charts of fig. 2-4 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, there is no strict ordering constraint on these steps, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-4 may comprise multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and not necessarily in sequence, but in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided an image processing apparatus including:
the receiving module 10 is configured to receive film reading response data sent by the server; the film reading response data is obtained by the server through computation on the image based on the user's film reading operation instruction; the film reading response data comprises image data and non-image data;
a merging module 20, configured to merge the image data and the non-image data to obtain an updated image;
and the display module 30 is used for displaying the updated image on the film reading interface.
The image processing apparatus provided above can execute the above-mentioned embodiment of the image processing method, and the implementation principle and technical effect are similar, and are not described herein again.
In an embodiment, on the basis of the above embodiment, as shown in fig. 6, the merging module 20 includes:
a drawing unit 201 for drawing an image to be displayed based on image data;
a determining unit 202, configured to determine a display position of the non-image data in the image to be displayed according to the coordinate information of the non-image data;
an adding unit 203 for adding non-image data to the display position.
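A deliberately simplified sketch of the three units of the merging module, using a character grid as a stand-in for real image data (all names are illustrative assumptions, not from the patent):

```python
# Drawing unit 201: build the image to be displayed from image data.
def draw_image(image_data: list) -> list:
    return [list(row) for row in image_data]

# Determining unit 202: read the display position from the coordinate
# information carried in the non-image data.
def display_position(annotation: dict) -> tuple:
    return annotation["y"], annotation["x"]

# Adding unit 203: place the non-image data at its display position.
def add_annotation(canvas: list, annotation: dict) -> None:
    y, x = display_position(annotation)
    canvas[y][x] = annotation["mark"]

canvas = draw_image(["....", "....", "...."])
add_annotation(canvas, {"x": 2, "y": 1, "mark": "*"})
updated = ["".join(row) for row in canvas]  # ["....", "..*.", "...."]
```

In a real viewer the canvas would be a pixel buffer and the annotation a text or graphic overlay, but the three-step structure (draw, locate by coordinates, add) is the same.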
In an embodiment, on the basis of the above embodiment, the receiving module 10 is further configured to: when the film reading response data contains image data but does not contain non-image data, draw the image to be displayed according to the image data, and return to the step of receiving the film reading response data sent by the server until the film reading response data contains non-image data.
In an embodiment, on the basis of the above embodiment, the film reading response data carries a data type tag. As shown in fig. 7, the apparatus further includes a determining module 40 specifically configured to: determine data whose data type tag matches image data as image data, and data whose data type tag does not match as non-image data.
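Under the assumption that each item of film reading response data carries a `type` field as its data type tag (the patent does not fix the tag's concrete form), the determining module can be sketched as:

```python
# Hypothetical determining module: split film reading response data into
# image data and non-image data by a data type tag.
def split_by_type_tag(response_items: list) -> tuple:
    image_data = [i for i in response_items if i.get("type") == "image"]
    non_image_data = [i for i in response_items if i.get("type") != "image"]
    return image_data, non_image_data

items = [
    {"type": "image", "pixels": b"\x00\x01"},
    {"type": "annotation", "text": "lesion", "x": 10, "y": 20},
    {"type": "device_param", "kv": {"kVp": 120}},
]
image_data, non_image_data = split_by_type_tag(items)
```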
In an embodiment, on the basis of the above embodiment, as shown in fig. 8, the apparatus further includes a sending module 50 configured to: acquire a film reading operation instruction of the user on the image displayed on the film reading interface, the film reading operation instruction being at least one of a page turning operation, a translation operation, a rotation operation, and a marking operation; determine an operation label corresponding to the film reading operation instruction according to a preset instruction-label correspondence, the instruction-label correspondence recording the operation label corresponding to each type of film reading operation instruction; and send the film reading operation instruction and its corresponding operation label to the server, the operation label being used to instruct the server to mark the film reading response data corresponding to the film reading operation instruction.
In an embodiment, on the basis of the above embodiment, as shown in fig. 9, the receiving module 10 includes:
a receiving unit 101, configured to receive interactive data sent by a server;
and the screening unit 102 is configured to filter the interactive data according to the operation label and determine the data carrying the operation label as the film reading response data.
In one embodiment, on the basis of the above embodiment, the non-image data includes at least one of annotation information, device parameter information and diagnosis information corresponding to the image.
For specific limitations of the image processing apparatus, reference may be made to the above limitations of the image processing method, which are not described herein again. The respective modules in the image processing apparatus described above may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 10. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement an image processing method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 10 is merely a block diagram of part of the structure relevant to the present disclosure and does not limit the computer devices to which the present disclosure applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
receiving film reading response data sent by a server; the film reading response data is obtained by the server through computation on the image based on the user's film reading operation instruction; the film reading response data comprises image data and non-image data;
merging the image data and the non-image data to obtain an updated image;
and displaying the updated image on the film reading interface.
In one embodiment, coordinate information is included in the non-image data; the processor, when executing the computer program, further performs the steps of: drawing an image to be displayed based on the image data; determining the display position of the non-image data in the image to be displayed according to the coordinate information of the non-image data; non-image data is added at the display position.
In one embodiment, the processor, when executing the computer program, further performs the steps of: if the film reading response data contains image data but does not contain non-image data, drawing the image to be displayed according to the image data, and returning to the step of receiving the film reading response data sent by the server until the film reading response data contains non-image data.
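This retry behavior can be sketched as follows; `receive_response`, `draw`, and `merge` are hypothetical stand-ins for the network call, the drawing step, and the merging step:

```python
def update_until_complete(receive_response, draw, merge, max_rounds: int = 10):
    """Keep receiving film reading response data until it contains
    non-image data; meanwhile display the bare image."""
    for _ in range(max_rounds):
        response = receive_response()
        if response.get("non_image"):
            # Non-image data has arrived: merge it with the image data.
            return merge(response["image"], response["non_image"])
        # Image data only: draw it and go back to receiving.
        draw(response["image"])
    raise TimeoutError("non-image data never arrived")
```

The `max_rounds` bound is an assumption added here so the sketch terminates; the patent itself only describes returning to the receiving step.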
In one embodiment, the film reading response data carries a data type tag, and the processor, when executing the computer program, further performs the steps of: determining data whose data type tag matches image data as image data, and data whose data type tag does not match as non-image data.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring a film reading operation instruction of a user on an image displayed on a film reading interface; the film reading operation instruction is any one of page turning operation, translation operation, rotation operation and marking operation; determining an operation label corresponding to the film reading operation instruction according to a preset instruction label corresponding relation; the instruction label corresponding relation comprises operation labels corresponding to the film reading operation instructions of various types; sending the film reading operation instruction and the operation label corresponding to the film reading operation instruction to a server; the operation label is used for indicating the server to mark the reading response data corresponding to the reading operation instruction.
In one embodiment, the processor, when executing the computer program, further performs the steps of: receiving interactive data sent by a server; and screening the interactive data according to the operation tags, and determining the data carrying the operation tags as the response data of the film reading.
In one embodiment, the non-image data includes at least one of annotation information, device parameter information, and diagnostic information corresponding to the image.
The implementation principle and technical effect of the computer device provided in this embodiment are similar to those of the method embodiments described above, and are not described herein again.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
receiving film reading response data sent by a server; the film reading response data is obtained by the server through computation on the image based on the user's film reading operation instruction; the film reading response data comprises image data and non-image data;
merging the image data and the non-image data to obtain an updated image;
and displaying the updated image on the film reading interface.
In one embodiment, coordinate information is included in the non-image data; the computer program when executed by the processor further realizes the steps of: drawing an image to be displayed based on the image data; determining the display position of the non-image data in the image to be displayed according to the coordinate information of the non-image data; non-image data is added at the display position.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: if the film reading response data contains image data but does not contain non-image data, drawing the image to be displayed according to the image data, and returning to the step of receiving the film reading response data sent by the server until the film reading response data contains non-image data.
In one embodiment, the film reading response data carries a data type tag, and the computer program, when executed by the processor, further performs the steps of: determining data whose data type tag matches image data as image data, and data whose data type tag does not match as non-image data.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring a film reading operation instruction of a user on an image displayed on a film reading interface; the film reading operation instruction is any one of page turning operation, translation operation, rotation operation and marking operation; determining an operation label corresponding to the film reading operation instruction according to a preset instruction label corresponding relation; the instruction label corresponding relation comprises operation labels corresponding to the film reading operation instructions of various types; sending the film reading operation instruction and the operation label corresponding to the film reading operation instruction to a server; the operation label is used for indicating the server to mark the reading response data corresponding to the reading operation instruction.
In one embodiment, the computer program when executed by the processor further performs the steps of: receiving interactive data sent by a server; and screening the interactive data according to the operation tags, and determining the data carrying the operation tags as the response data of the film reading.
The computer storage medium provided in this embodiment has similar implementation principles and technical effects to those of the above method embodiments, and is not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, and the like. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments merely express several embodiments of the present application, and their description is specific and detailed, but should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An image processing method, characterized in that the method comprises:
receiving film reading response data sent by a server; the film reading response data is obtained by the server through computation on an image based on a film reading operation instruction of a user; the film reading response data comprises image data and non-image data;
merging the image data and the non-image data to obtain an updated image;
and displaying the updated image on a film reading interface.
2. The method of claim 1, wherein said merging the image data and the non-image data comprises:
drawing an image to be displayed based on the image data;
determining the display position of the non-image data in the image to be displayed according to the coordinate information of the non-image data;
adding the non-image data to the display location.
3. The method of claim 1, further comprising:
and if the film reading response data contains the image data and does not contain the non-image data, drawing an image to be displayed according to the image data, and returning to the step of receiving the film reading response data sent by the server until the film reading response data contains the non-image data.
4. The method according to any one of claims 1-3, wherein the film reading response data carries a data type tag, the method further comprising:
data of which the data type tag matches the image data is determined as image data, and data of which the data type tag does not match is determined as non-image data.
5. The method according to any one of claims 1 to 3, wherein before receiving the film reading response data sent by the server, the method further comprises:
acquiring a film reading operation instruction of a user on the image displayed on the film reading interface; the reading operation instruction is at least one of page turning operation, translation operation, rotation operation and marking operation;
determining an operation label corresponding to the reading operation instruction according to a preset instruction label corresponding relation; the instruction label corresponding relation comprises operation labels corresponding to the film reading operation instructions of various types;
sending the film reading operation instruction and the operation label corresponding to the film reading operation instruction to the server; the operation label is used for indicating the server to mark the reading response data corresponding to the reading operation instruction.
6. The method of claim 5, wherein the receiving of the film reading response data sent by the server comprises:
receiving interactive data sent by the server;
and screening the interactive data according to the operation tags, and determining the data carrying the operation tags as the reading response data.
7. The method of any of claims 1-3, wherein the non-image data includes at least one of annotation information, device parameter information, and diagnostic information corresponding to the image.
8. An image processing apparatus, characterized in that the apparatus comprises:
the receiving module is used for receiving film reading response data sent by the server; the film reading response data is obtained by the server through computation on an image based on a film reading operation instruction of a user; the film reading response data comprises image data and non-image data;
the merging module is used for merging the image data and the non-image data to obtain an updated image;
and the display module is used for displaying the updated image on a reading interface.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202011288760.9A 2020-11-17 2020-11-17 Image processing method, image processing device, computer equipment and storage medium Pending CN112330707A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011288760.9A CN112330707A (en) 2020-11-17 2020-11-17 Image processing method, image processing device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011288760.9A CN112330707A (en) 2020-11-17 2020-11-17 Image processing method, image processing device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112330707A true CN112330707A (en) 2021-02-05

Family

ID=74322390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011288760.9A Pending CN112330707A (en) 2020-11-17 2020-11-17 Image processing method, image processing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112330707A (en)


Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101192945A (en) * 2006-11-23 2008-06-04 中兴通讯股份有限公司 A device status display system and method
CN101729879A (en) * 2009-12-15 2010-06-09 山东大学 Method for realizing real-time video transmission based on MIMO-OFDM system
CN101777012A (en) * 2009-12-31 2010-07-14 深圳市蓝韵实业有限公司 Three-dimensional image server multi-task managing and scheduling method
CN101998045A (en) * 2009-08-11 2011-03-30 佛山市顺德区顺达电脑厂有限公司 Image processing device capable of synthesizing scene information
CN102088575A (en) * 2009-12-03 2011-06-08 深圳市华普电子技术有限公司 Caption displaying player and caption displaying method thereof
CN102111631A (en) * 2009-12-28 2011-06-29 索尼公司 Image processing device, image processing method, and program
KR20110088778A (en) * 2010-01-29 2011-08-04 주식회사 팬택 Terminal and method for providing augmented reality
CN103577699A (en) * 2013-11-14 2014-02-12 哈尔滨工程大学 DICOM medical image displaying and processing method based on Android platform
US20140327698A1 (en) * 2013-05-06 2014-11-06 Nvidia Corporation System and method for hybrid graphics and text rendering and client computer and graphics processing unit incorporating the same
CN105190704A (en) * 2013-05-09 2015-12-23 三星电子株式会社 Method and apparatus for providing contents including augmented reality information
CN110197715A (en) * 2019-05-19 2019-09-03 复旦大学附属华山医院 A kind of medical image browsing system for read tablet teaching
CN110377263A (en) * 2019-07-17 2019-10-25 Oppo广东移动通信有限公司 Image composition method, device, electronic equipment and storage medium
CN110598025A (en) * 2019-08-13 2019-12-20 武汉联影医疗科技有限公司 Film reading method, system, device, equipment and storage medium
CN111163362A (en) * 2019-12-30 2020-05-15 北京佳讯飞鸿电气股份有限公司 Video receiving method and system capable of self-adapting retransmission waiting time
CN111159598A (en) * 2019-12-26 2020-05-15 武汉联影医疗科技有限公司 Image browsing method and device, computer equipment and storage medium
CN111696087A (en) * 2020-05-29 2020-09-22 曹怡珺 Medical image deformation and labeling method


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113961124A (en) * 2021-09-27 2022-01-21 上海联影医疗科技股份有限公司 Medical image display method, medical image display device, computer equipment and storage medium
CN113961124B (en) * 2021-09-27 2024-02-27 上海联影医疗科技股份有限公司 Medical image display method, medical image display device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111192356B (en) Method, device, equipment and storage medium for displaying region of interest
US20200082571A1 (en) Method and apparatus for calibrating relative parameters of collector, device and storage medium
US11900594B2 (en) Methods and systems for displaying a region of interest of a medical image
CN111198739B (en) Application view rendering method, device, equipment and storage medium
CN111063422A (en) Medical image labeling method, device, equipment and medium
KR20200092466A (en) Device for training analysis model of medical image and training method thereof
CN111916184B (en) Medical examination image downloading method and device and computer equipment
CN111159598A (en) Image browsing method and device, computer equipment and storage medium
CN112330707A (en) Image processing method, image processing device, computer equipment and storage medium
CN112530549B (en) Image display method, device and computer equipment
JP7187608B2 (en) Apparatus and method for visualizing digital chest tomosynthesis and anonymized display data export
US20200176096A1 (en) Medical information device, medical information system, and method for medical information processing
CN113744843A (en) Medical image data processing method and device, computer equipment and storage medium
CN111653330B (en) Medical image display and diagnostic information generation method, system, terminal and medium
CN110600099A (en) Electronic report display method, system, device, equipment and storage medium
CN116259403A (en) Diagnostic data display method, computer device, and storage medium
CN113742506A (en) Image display method and computer equipment
CN109799936B (en) Image generation method, device, equipment and medium
CN114496175A (en) Medical image viewing method, device, equipment and storage medium
JP2019105921A (en) Image interpretation report creation support apparatus and image interpretation report creation support method
JP7296941B2 (en) Viewing medical images
JP7216660B2 (en) Devices, systems, and methods for determining reading environments by synthesizing downstream needs
CN111651131B (en) Image display method and device and computer equipment
CN112509674A (en) Method, system and display method for downloading DICOM medical image data
US20210125704A1 (en) Techniques for displaying medical image data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination