CN107596578A - Alignment mark identification and position determination method, imaging device and storage medium - Google Patents


Info

Publication number
CN107596578A (application CN201710859487.2A; granted and published as CN107596578B)
Authority
CN (China)
Other languages
Chinese (zh)
Inventors
荣成城
周强强
Assignee
Shanghai United Imaging Healthcare Co., Ltd. (applicant and current assignee)
Legal status
Granted; Active

Landscapes

  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present invention relates to a method for determining the position of alignment marks, including: acquiring a scan image sequence of a subject in an initial alignment mark state; determining the isocenter in the initial alignment mark state from the scan image sequence, which includes identifying the initial alignment marks in each slice of the sequence and determining the isocenter from those marks; computing the center point of the target object from the scan image sequence using a neural network model; and determining the position of the alignment marks from the spatial offset between the isocenter and the center point of the target object. With this method, staff no longer need to traverse the scan image sequence manually to identify and label the initial alignment marks, so no human error is introduced and labeling accuracy is improved. The invention further relates to an imaging device, a storage medium, and an alignment mark recognition method.

Description

Alignment mark identification and position determination method, imaging device and storage medium
Technical field
The present invention relates to the technical field of medical equipment, and more particularly to an alignment mark identification and position determination method, an imaging device, and a storage medium.
Background technology
A positioning (setup) operation must be performed before radiotherapy so that the center of the target object is located at the isocenter of the radiotherapy equipment. The isocenter is the point where the central axes of the X-ray beams from different directions intersect; the closer a region is to the isocenter, the higher the X-ray dose it receives. Lead markers therefore need to be attached to the subject as alignment reference points. Traditionally, to determine the positions of the lead markers, a physician must browse the CT image sequence slice by slice to identify and label the alignment marks. This process is tedious, prone to human error, and reduces labeling accuracy.
Summary of the invention
Accordingly, it is necessary to provide an alignment mark identification method, an imaging device, and a storage medium that can improve labeling accuracy.
A method for determining the position of alignment marks, including:
acquiring a scan image sequence of a subject in an initial alignment mark state;
determining the isocenter in the initial alignment mark state from the scan image sequence, including:
identifying the initial alignment marks in each slice of the scan image sequence; and
determining the isocenter from the initial alignment marks;
computing the center point of the target object from the scan image sequence using a neural network model; and
determining the position of the alignment marks from the spatial offset between the isocenter and the center point of the target object.
In the above alignment mark position determination method, each initial alignment mark is identified in the acquired scan image sequence, and the isocenter is determined from those marks; the center point of the target object is also computed from the scan image sequence using a neural network model, so that the position of the alignment marks can be determined from the spatial offset between the two. Staff no longer need to traverse the scan image sequence manually to identify and label the initial alignment marks, so no human error is introduced and labeling accuracy can be improved.
In one embodiment, identifying the initial alignment marks in each slice of the scan image sequence includes performing the following steps on each slice of the sequence:
obtaining the contour of the subject in the scan image, the contour consisting of contour points;
dividing the image region along the contour into multiple sub-image regions, each containing at least part of the contour; and
identifying each sub-image region in turn and determining the initial alignment marks.
In one embodiment, when dividing the image region along the contour into multiple sub-image regions, each sub-image region is centered on a contour point, and the distance between the center points of adjacent sub-image regions is greater than the size of the initial alignment mark.
In one embodiment, identifying each sub-image region in turn and determining the initial alignment marks includes:
identifying each sub-image region in turn, and when an alignment mark is recognized, labeling the sub-image region and incrementing the alignment mark count by one;
judging whether the alignment mark count is greater than or equal to a target value; and
when the count is greater than or equal to the target value, ending the scan and recognition of the sub-image regions in the scan image.
In one embodiment, when identifying each sub-image region, it is judged whether the gray level of each pixel in the region exceeds a gray threshold, and pixel regions whose gray level exceeds the threshold are identified as alignment marks.
In one embodiment, after identifying each sub-image region and determining the initial alignment marks, and before determining the isocenter from them, the method further includes: taking scan images in which the alignment mark count is greater than or equal to the target value as target scan images;
determining the isocenter from the initial alignment marks then means determining it from the initial alignment marks in the target scan images.
In one embodiment, determining the isocenter from the initial alignment marks in the target scan image includes:
obtaining the position of each initial alignment mark within its corresponding sub-image region in the target scan image;
computing each initial alignment mark's position in the target scan image from the positional relationship between the sub-image region and the target scan image; and
computing the spatial position of the isocenter from the positions of the initial alignment marks in the target scan image.
In one embodiment, identifying each sub-image region in turn and determining the initial alignment marks includes:
identifying each sub-image region in turn and labeling it when an alignment mark is recognized;
computing the position of each labeled zone in the corresponding scan image;
judging whether the scan image contains two labeled zones whose distance is smaller than the size of the initial alignment mark;
if so, merging the two zones into a single labeled zone whose position is the average of the positions of the two, and returning to the step of judging whether the merged scan image still contains two labeled zones closer than the initial alignment mark size; and
if not, identifying each labeled zone as an initial alignment mark.
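The merging rule in this embodiment can be sketched as follows, treating each labeled zone as a 2-D position; `merge_mark_zones` is a hypothetical helper written for illustration, not part of the patent:

```python
import numpy as np

def merge_mark_zones(zones, mark_size):
    """Iteratively merge labeled zones that lie closer together than the
    alignment mark size, replacing each close pair with the average of
    their positions, until no such pair remains."""
    zones = [np.asarray(z, dtype=float) for z in zones]
    merged = True
    while merged:
        merged = False
        for i in range(len(zones)):
            for j in range(i + 1, len(zones)):
                if np.linalg.norm(zones[i] - zones[j]) < mark_size:
                    # Merge the pair into one zone at their mean position,
                    # then re-check the remaining zones from scratch.
                    mean = (zones[i] + zones[j]) / 2
                    zones = [z for k, z in enumerate(zones) if k not in (i, j)]
                    zones.append(mean)
                    merged = True
                    break
            if merged:
                break
    return zones
```

For example, two zones 1 pixel apart with a mark size of 3 collapse into one zone at their midpoint, while a zone farther away survives unchanged.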
In one embodiment, the method further includes obtaining the spatial position of the couch on which the subject lies in the initial alignment mark state;
when determining the isocenter in the initial alignment mark state from the scan image sequence, the scan image used to determine the isocenter corresponding to the initial alignment marks is selected according to the spatial position of each scan image in the sequence and its spatial relationship with the couch.
In one embodiment, computing the center point of the target object from the scan image sequence using a neural network model includes:
inputting the scan image sequence into the neural network model, which automatically recognizes the contour information of the target object region in the corresponding scan images and computes the center point of the target object from that contour information.
In one embodiment, the method further includes training the neural network model with annotated images; the annotations include at least one feature, and the features include segmented contour information.
An imaging device, including:
a scanning apparatus for acquiring a scan image sequence of a subject in an initial alignment mark state; and
a processor, connected with the scanning apparatus, for obtaining the scan image sequence of the subject in the initial alignment mark state;
the processor is further configured to determine the isocenter in the initial alignment mark state from the scan image sequence, including: identifying the initial alignment marks in each slice of the sequence, and determining the isocenter from the initial alignment marks;
the processor is further configured to compute the center point of the target object from the scan image sequence using a neural network model, and to determine the position of the alignment marks from the spatial offset between the isocenter and the center point of the target object.
A storage medium storing a computer program which, when executed by a processor, can perform the steps of the method of any of the foregoing embodiments.
A method for recognizing alignment marks, including:
acquiring a scan image of a subject in an alignment mark state; and
automatically identifying the initial alignment marks in the scan image, including: judging whether the gray level of each pixel in the scan image exceeds a gray threshold, and identifying pixel regions whose gray level exceeds the threshold as alignment marks.
Brief description of the drawings
Fig. 1 is a flowchart of the alignment mark position determination method in one embodiment;
Fig. 2 is a schematic diagram of the initial alignment mark state in one embodiment;
Fig. 3 is a flowchart of step S122 of Fig. 1 in one embodiment;
Fig. 4 is a flowchart of step S330 of Fig. 3 in one embodiment;
Fig. 5 is a flowchart of obtaining the position of an alignment mark in one embodiment;
Fig. 6 is a flowchart of step S330 of Fig. 3 in another embodiment;
Fig. 7 is a schematic diagram of a reference image sequence with annotations in one embodiment;
Fig. 8 is a schematic diagram of the process of building the neural network model in one embodiment;
Fig. 9 is a flowchart of the alignment mark recognition method in one embodiment.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the invention and do not limit it.
The alignment mark position determination method of one embodiment can be used in an imaging device. The imaging device may be, but is not limited to, a CT scanner; in the detailed embodiments a CT scanner is used as an example without limiting the scope of protection of the invention. The imaging device can be used in a radiotherapy planning system or in radiotherapy equipment. In one embodiment, the radiotherapy planning system and the radiotherapy equipment are separate devices; in other embodiments, the planning system may be integrated into the radiotherapy equipment. Alignment marks are attached to the subject's body as positioning reference points for the imaging device and the radiotherapy equipment. During positioning, these reference points allow the center point of the target object on the subject's body to be aligned with the isocenter of the imaging device and the radiotherapy equipment. Fig. 1 is a flowchart of the alignment mark position determination method in one embodiment; the method includes the following steps:
Step S110: acquire a scan image sequence of the subject in the initial alignment mark state.
The scan image sequence of the subject in the initial alignment mark state can be acquired by the scanning apparatus of the imaging device, i.e. the acquired scan images contain the initial alignment marks. In one embodiment, the sequence is acquired in spiral scan mode. For the initial alignment marks, staff first estimate the position of the target object in the subject's body from experience, then use an external positioning system such as an external laser lamp system to focus the lasers on that estimated position. Each laser beam of the laser lamp projects a focal spot on the subject's body surface, and an alignment mark is attached at each corresponding focal spot. The alignment marks then indicate the isocenter of the scanning apparatus of the imaging device, and the image scan is performed in this state to obtain the scan image sequence. In general, the number of alignment marks equals the number of laser beams of the external laser lamp system. In this embodiment, the external laser lamp system has three laser sources, forming three corresponding focal spots on the subject's body surface. The three sources project cross-shaped laser beams from above, from the left, and from the right of the couch; the point where the center of each cross-shaped beam is projected on the body surface is that beam's focal spot. The three beams usually lie in one CT slice plane, and the focal spots generally form an equilateral triangle; taking the side formed by the left and right focal spots as the base, the midpoint of the base is the isocenter of the initial alignment mark state, as shown in Fig. 2. In Fig. 2, 200 denotes an alignment mark, 20 denotes a positioning laser beam, and O denotes the isocenter. The isocenter O usually needs to coincide with the isocenter of the radiotherapy equipment, so that the target object receives the maximum radiation dose and the target tissue is destroyed.
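The geometry described above — the isocenter taken as the midpoint of the base formed by the left and right laser foci — can be sketched as follows. `isocenter_from_foci` and the coordinate convention are assumptions for illustration, not notation from the patent:

```python
import numpy as np

def isocenter_from_foci(top, left, right):
    """Isocenter of the initial alignment mark state: the midpoint of the
    base formed by the left and right laser focal spots. The top focus is
    accepted for completeness but does not enter the midpoint."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    return (left + right) / 2
```

For symmetric left/right foci at (-5, 0, 0) and (5, 0, 0), the computed isocenter lies at the origin.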
Step S120: determine the isocenter in the initial alignment mark state from the scan image sequence.
An alignment mark has a high CT value relative to the subject. For example, the mark may be a lead dot made of lead material. Its CT value, which can be 2000 to 4000, is far above that of the subject's normal tissue, so it appears in the scan image sequence (or on the CT images) as a bright spot of high gray value. Moreover, since the alignment marks are mostly attached to the body surface close to the skin, and the density of the surrounding air or tissue is comparatively low, the gray value of a mark contrasts sharply with neighboring pixels, making gray-value-based identification of the marks straightforward.
The initial alignment marks in each scan image of the sequence are identified automatically; the scan images containing the initial alignment marks needed to determine the isocenter are then selected from the recognition results, and the isocenter is determined from the positions of the initial alignment marks in those images. Step S120 includes steps S122 to S124.
Step S122: identify the initial alignment marks in each slice of the scan image sequence.
Each scan image in the sequence corresponds to a different tomographic slice of the subject, so the initial alignment marks contained in each scan image differ. The initial alignment marks therefore have to be identified in every slice of the sequence. Specifically, it is judged whether the scan image contains image regions whose gray value exceeds a gray threshold; if so, the scan image is determined to contain alignment marks, and each such region is automatically identified as an alignment mark. The gray threshold is set according to the gray value of the alignment marks in the scan image, i.e. it can be set to the CT value of the marks.
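The gray-threshold test can be sketched as follows; `find_mark_pixels` is a hypothetical helper, and the default threshold of 2000 simply takes the low end of the 2000-4000 CT-value range quoted above for lead markers — a real system would tune it per scanner:

```python
import numpy as np

def find_mark_pixels(ct_slice, gray_threshold=2000):
    """Flag pixels brighter than the gray threshold as candidate
    alignment-mark pixels and report whether the slice contains any."""
    mask = ct_slice > gray_threshold
    return mask, bool(mask.any())
```

A synthetic slice of soft-tissue-level values with a small 3000-valued patch yields a mask covering exactly that patch, while a slice without a bright patch yields no detection.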
Step S124: determine the isocenter from the initial alignment marks.
The scan image used to determine the isocenter can be selected according to the initial alignment marks recognized in each slice of the sequence, and the isocenter is finally determined from the initial alignment marks in that scan image. As noted above, the relative positions of the laser beams in the external positioning system, such as the external laser lamp system, are known, so once the positions of the focal spots they form on the subject, i.e. the initial alignment marks, are determined, the position of the isocenter can be determined from the positions of the marks, as shown in Fig. 2. Specifically, the position of the isocenter within its scan image is computed first, and its spatial position is then determined from that in-image position and the spatial position corresponding to the scan image. In another embodiment, the scan image coordinate system, the gantry coordinate system, and the couch coordinate system are the same coordinate system, so the computed in-image coordinates of the isocenter can serve directly as its spatial position without further coordinate transformation.
Through the above steps, the initial alignment marks are identified automatically, and after the target scan image is determined, the isocenter in the initial alignment mark state is computed from the positions of the initial alignment marks in the scan image. Staff need not traverse the scan image sequence manually to complete the identification of the initial alignment marks, so no human error is introduced, which helps improve labeling accuracy.
Step S130: compute the center point of the target object from the scan image sequence using a neural network model.
The neural network model is used to first determine the contour information of the target object region in the corresponding scan images of the sequence, and the center point of the target object is then computed from the determined contour information of the target object region. The neural network model can be built in advance, so the acquired scan image sequence is simply fed into it; the contour information of the target object region in the corresponding scan images is recognized automatically, and the center point of the target object is computed from the contour recognition result.
Likewise, when computing the center point of the target object, its position in the corresponding scan image can be determined first, and the spatial position of the center point is then determined from that position and the spatial position of the corresponding scan image. When the scan image coordinate system, the gantry coordinate system, and the couch coordinate system are the same coordinate system, the in-image coordinates can serve directly as the spatial position of the center point of the target object.
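The patent does not specify the network architecture, so the sketch below only illustrates the downstream "contour information to center point" step: given per-slice binary masks of the target object region (as a segmentation model might output) and each slice's longitudinal position, compute a 3-D center point. All names are hypothetical:

```python
import numpy as np

def center_from_mask(masks, z_positions):
    """Center point of the target object from per-slice binary masks and
    the corresponding slice positions: average the in-plane centroid of
    each non-empty mask together with its slice position."""
    pts = []
    for mask, z in zip(masks, z_positions):
        ys, xs = np.nonzero(mask)
        if len(xs):
            pts.append([xs.mean(), ys.mean(), z])
    return np.mean(pts, axis=0)
```

Two identical square masks centered at pixel (5, 5) on slices at z = 0 and z = 2 give a center point midway between the slices.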
Step S140: determine the position of the alignment marks from the spatial offset between the isocenter and the center point of the target object.
The spatial offset between the isocenter and the center point of the target object is determined from their spatial positions. Once the offset is determined, the position of the alignment marks can be determined from it. Specifically, after the spatial offset is obtained, the couch is moved by that offset, so that the center point of the target object moves to the isocenter of the imaging device; the focal spots that the laser beams of the external laser lamp system now form on the subject's body are the correct positions of the alignment marks. In one embodiment, the spatial position of each initial alignment mark can be determined from its position in the target scan image and the spatial position of that image; applying the spatial offset to those positions then yields the positions of the alignment marks to be determined.
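The couch move described above amounts to applying the spatial offset between the isocenter and the target center; a minimal sketch, with the sign convention and the helper name assumed for illustration:

```python
import numpy as np

def couch_shift(isocenter, target_center):
    """Offset by which to move the couch so that the target object's
    center lands on the isocenter (sign convention assumed)."""
    return np.asarray(isocenter, float) - np.asarray(target_center, float)
```

For a target center at (10, -5, 2) and the isocenter at the origin, the couch must shift by (-10, 5, -2).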
With the above alignment mark position determination method, each initial alignment mark is identified in the acquired scan image sequence and the isocenter is determined from the marks; the center point of the target object is also computed from the sequence using a neural network model, so the position of the alignment marks is determined from the spatial offset between the two. Staff need not traverse the scan image sequence manually to identify and label the initial alignment marks, so no human error is introduced and labeling accuracy can be improved.
In one embodiment, in step S122, the steps shown in Fig. 3 are performed on each slice of the scan image sequence:
Step S310: obtain the contour of the subject in the scan image.
The obtained contour consists of contour points, i.e. the contour point set is obtained automatically along with the contour of the subject. Specifically, a skin segmentation algorithm is applied to the acquired scan image sequence to obtain the closed contour (a set of two-dimensional coordinate points) in each slice. Since the subject's skin is mostly curved, the closed contour is a closed contour curve.
Step S320: divide the image region along the contour into multiple sub-image regions, each containing at least part of the contour.
During division, only the image region along the contour line is divided; other regions are left alone, so the identification of the initial alignment marks follows the contour line as a path, greatly reducing the search range needed for recognition and improving processing efficiency. In one embodiment, the total area of the sub-image regions is smaller than the area of the scan image, ensuring that the whole scan image need not be examined during recognition of the initial alignment marks. In one embodiment, the image region along the contour line is divided using contour points as the center points of the sub-image regions, and the distance between the center points of adjacent sub-image regions is greater than the size of the initial alignment mark, minimizing overlap between neighboring regions and improving processing speed. The size of a sub-image region can likewise be set according to the size of the initial alignment mark in the scan image; in one embodiment, each sub-image region is 20×20 pixels.
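The division rule above — sub-image regions centered on contour points, with adjacent centers farther apart than the mark size — can be sketched as follows; `subregion_centers` is a hypothetical helper and assumes an ordered list of contour points:

```python
import numpy as np

def subregion_centers(contour_points, mark_size):
    """Pick contour points to serve as sub-image-region centers so that
    consecutive centers are farther apart than the alignment mark size.
    contour_points is an ordered (N, 2) array of (x, y) points."""
    contour_points = np.asarray(contour_points, float)
    centers = [contour_points[0]]
    for p in contour_points[1:]:
        if np.linalg.norm(p - centers[-1]) > mark_size:
            centers.append(p)
    return np.array(centers)
```

On ten contour points spaced one pixel apart with a mark size of 3, only every fourth point becomes a region center, keeping the overlap between neighboring regions small.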
Step S330: identify each sub-image region in turn and determine the initial alignment marks.
After the image region along the contour line has been divided, the resulting sub-image regions are scanned in turn to identify whether each contains an alignment mark. The scan can start at the sub-image region of some contour point on the contour and proceed clockwise or counterclockwise along the contour; in other embodiments, the sub-image regions can be scanned row by row. For each region, it is judged whether the gray level of each pixel exceeds the gray threshold, and the sub-image region is determined to contain an alignment mark when the threshold is exceeded.
In other embodiments, the scan image can be divided in ways other than the steps of Fig. 3, or not divided at all.
In one embodiment, step S330 can be implemented by the flow of Fig. 4, comprising the following sub-steps:
Step S410: identify each sub-image region in turn; when an alignment mark is recognized, label the sub-image region and increment the alignment mark count by one.
When a sub-image region is identified as containing an alignment mark, the region is labeled. In one embodiment, the sub-image region can be binarized: when a pixel's gray level is below the gray threshold, its value is set to 0, otherwise to 1. In other embodiments, other labeling methods can distinguish whether a sub-image region contains an alignment mark. When a region is recognized as containing a mark, the alignment mark count of the corresponding slice is incremented by one; when no mark is recognized, scanning continues with the next sub-image region until all sub-image regions in the slice have been examined. Thus, once the recognition of the slice is complete, the number of alignment marks it contains can be counted.
Step S420: judge whether the alignment mark count is greater than or equal to the target value.
When the alignment mark count is judged to be greater than or equal to the target value, the scan image can be determined to be a target scan image; the same criterion can be used in step S124. When the count reaches the target value, step S440 is performed; otherwise step S430 is performed.
Step S430: judge whether all sub-image regions in the current scan image have been examined.
If all sub-image regions in the current scan image have been examined, step S440 is performed; otherwise the flow returns to step S410.
Step S440: end the scan and recognition of the sub-image regions in the scan image.
Once the number of alignment marks in the scan image has reached the target value, i.e. the isocenter in the initial alignment mark state can already be determined, the remaining sub-image regions need not be scanned, saving recognition time and improving overall processing efficiency.
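Steps S410 to S440 can be sketched as a scan loop with early termination; `has_mark` stands in for the gray-threshold test of a sub-image region, and all names are hypothetical:

```python
def count_marked_regions(regions, has_mark, target):
    """Scan sub-image regions in order, stopping early once the mark
    count reaches the target value. Returns the count and how many
    regions were actually examined."""
    count = 0
    examined = 0
    for region in regions:
        examined += 1
        if has_mark(region):
            count += 1
        if count >= target:
            break  # enough marks found; skip the remaining regions
    return count, examined
```

With seven regions of which the second and fourth contain marks and a target of 2, the loop stops after examining only four regions.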
In one embodiment, after a sub-image region is identified as containing an alignment mark, i.e. after step S410, a step of obtaining the position of the alignment mark can also be performed, as shown in Fig. 5, comprising the following steps:
Step S510: obtain the minimum and maximum lateral coordinates of all value-1 pixels in the sub-image region.
In this embodiment, each scan image has its own two-dimensional coordinate system, and so does each sub-image region. The relationship between each sub-image region's coordinate system and the scan image's coordinate system is known, so a point's coordinates in the scan image can be obtained from its coordinates in the sub-image region. Here, the lateral coordinate is the X coordinate of the sub-image region's coordinate system, and the longitudinal coordinate is the Y coordinate. The minimum lateral coordinate is X_min and the maximum is X_max.
Step S520: obtain the minimum and maximum longitudinal coordinates of all value-1 pixels in the sub-image region.
The minimum longitudinal coordinate is Y_min and the maximum is Y_max.
Step S530: determine the coordinates of the alignment mark in the sub-image region from the lateral minimum and maximum and the longitudinal minimum and maximum.
The lateral coordinate of the alignment mark is (X_min + X_max)/2, and the longitudinal coordinate is (Y_min + Y_max)/2.
Step S540: transform the coordinates from the sub-image region to the scan image to compute the position of the alignment mark in the scan image.
In other embodiments, the sub-image region, scan image, and couch coordinate systems are identical, so the coordinates of the alignment mark obtained in the sub-image region are also its coordinates in the scan image and its corresponding spatial position, and step S540 need not be performed.
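Steps S510 to S540 can be sketched as follows, assuming the sub-image region's (x, y) origin in the scan image is known; `mark_position` and the origin convention are hypothetical:

```python
import numpy as np

def mark_position(binary_subimage, region_origin=(0, 0)):
    """Mark center from the value-1 pixels of a binarized sub-image
    region: the midpoint of their min/max coordinates (steps S510-S530),
    shifted by the region's (x, y) origin in the scan image (step S540)."""
    ys, xs = np.nonzero(binary_subimage)
    x = (xs.min() + xs.max()) / 2 + region_origin[0]
    y = (ys.min() + ys.max()) / 2 + region_origin[1]
    return x, y
```

A 4×3 block of value-1 pixels spanning columns 8-11 and rows 3-5, in a region whose origin in the scan image is (100, 200), yields the mark center (109.5, 204.0).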
In one embodiment, step S330 may also be implemented by the flow shown in Fig. 6, which includes the following steps:
Step S602: identify each sub-image area in turn and, when an alignment mark is recognized, perform marking processing on the sub-image area.
For each sub-image area, it is judged row by row or column by column whether the gray value of each pixel exceeds a gray threshold, and a pixel is marked when its gray value exceeds the threshold. In one embodiment, binarization may be applied to the sub-image area: when the gray value of a pixel is below the gray threshold, the pixel is set to 0, and otherwise to 1. In other embodiments, other labels may be used to mark the pixels whose gray values exceed the threshold. Thus, when a sub-image area contains part or all of an initial alignment mark, it necessarily contains marked pixels. The marked pixels form a mark zone, which constitutes either a complete initial alignment mark or a part of one.
Step S604: calculate the position of the mark zone in the corresponding scan image.
Specifically, for a marked sub-image area, the minimum X_min and maximum X_max of the abscissae of the marked pixels (namely the mark zone) are obtained, as are the minimum Y_min and maximum Y_max of their ordinates, so that the coordinates of the alignment mark in the sub-image area can be calculated from these values.
The lateral coordinate of the alignment mark is: (X_min+X_max)/2.
The longitudinal coordinate of the alignment mark is: (Y_min+Y_max)/2.
In one embodiment, the coordinate systems of the sub-image area, the scan image, and the scanning bed are identical, so the coordinates of the alignment mark obtained in the sub-image area are also its coordinates in the scan image and its corresponding spatial position. In other embodiments, the sub-image area and the scan image have different coordinate systems, so the coordinates of the alignment mark obtained in the sub-image area must additionally be transformed according to the coordinate correspondence between the two to obtain the coordinates of the alignment mark in the scan image.
Step S606: judge whether the scan image contains two mark zones whose distance is less than the size of the initial alignment mark.
When one initial alignment mark is split between two or more sub-image areas, that single mark is recognized in the scan image as two or more mark zones, so the mark zones must be merged to make the number and positions of the initial alignment marks finally identified in the scan image accurate. When the scan image contains two mark zones whose distance is less than the size of the initial alignment mark, step S608 is performed; otherwise step S610 is performed.
Step S608: merge the two mark zones whose distance is less than the size of the initial alignment mark into one mark zone, and take the average of the positions of the two mark zones as the position of the merged mark zone.
After the merging is completed, step S606 is performed again, until no two mark zones in the scan image are closer together than the size of the initial alignment mark, ensuring that each mark zone represents one complete initial alignment mark.
Step S610: identify each mark zone as an initial alignment mark.
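The merge loop of steps S606 to S610 can be sketched as follows. This is an illustrative sketch under assumed names (`merge_mark_zones`, plain (x, y) tuples for zone positions); the patent does not specify a data structure.

```python
def merge_mark_zones(zones, mark_size):
    """Merge mark zones closer together than the initial alignment mark size.

    zones: list of (x, y) mark-zone positions in scan-image coordinates.
    Two zones closer than mark_size are assumed to be fragments of one
    initial alignment mark split across sub-image areas (S608); they are
    replaced by the average of their positions. Repeats until no pair is
    too close (S606), then each remaining zone is one mark (S610).
    """
    zones = list(zones)
    merged = True
    while merged:
        merged = False
        for i in range(len(zones)):
            for j in range(i + 1, len(zones)):
                (x1, y1), (x2, y2) = zones[i], zones[j]
                if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 < mark_size:
                    zones[i] = ((x1 + x2) / 2, (y1 + y2) / 2)  # S608
                    del zones[j]
                    merged = True  # re-check the whole image (S606)
                    break
            if merged:
                break
    return zones
```

Each entry of the returned list is then identified as one complete initial alignment mark.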
In one embodiment, between step S122 and step S124 there is a further step of taking the scan images in which the number of alignment marks is greater than or equal to an expected value as target scan images. That is, after step S122 is completed, it is judged for each scan image whether the number of initial alignment marks is greater than or equal to the expected value, and the scan images whose alignment-mark count reaches the expected value are taken as target scan images. The number of initial alignment marks in each scan image is compared with the expected value, and a slice is determined to be a target scan image when its count of initial alignment marks is greater than or equal to the expected value. The expected value may equal the total number of laser beams used by the external laser lamp system. In the present embodiment the expected value is 3, so a scan image whose count of initial alignment marks equals 3 can be determined to be a target scan image. In other embodiments, the external laser lamp system may position with more laser beams, and the expected value is set according to the actual situation. The setting of the expected value is governed by whether the isocenter point under the initial alignment mark state can finally be determined.
When the number of initial alignment marks in a scan image is judged to be greater than or equal to the expected value, the scan image is taken as a target scan image. Step S124 then determines the isocenter point of the initial alignment marks in the target scan image. When multiple target scan images exist, the coordinates of the isocenter points determined from the individual target scan images are averaged to give the coordinates of the final isocenter point. In the ordinary case there are one or two target scan images, depending on the CT slice thickness.
In one embodiment, the number of initial alignment marks may also be counted during the step of identifying the initial alignment marks in each scan image slice of the scanned image sequence, namely during step S122. Once the current scan image is judged to have an initial-alignment-mark count greater than or equal to the expected value and can serve as a target scan image, the identification of the scanned image sequence is terminated and the isocenter point is determined directly from the initial alignment marks in that target scan image, which improves processing efficiency.
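The target-image selection and isocenter averaging described above can be sketched as follows. A minimal sketch under assumed names (`select_target_images`, `average_isocenter`); the expected value of 3 corresponds to the three laser beams of this embodiment.

```python
def select_target_images(mark_counts, expected=3):
    """Return indices of scan images whose initial-alignment-mark count
    is greater than or equal to the expected value (the number of laser
    beams of the external laser lamp system)."""
    return [i for i, n in enumerate(mark_counts) if n >= expected]

def average_isocenter(isocenters):
    """When several target scan images exist, the final isocenter is the
    per-coordinate average of the isocenters they each determine."""
    n = len(isocenters)
    return tuple(sum(c) / n for c in zip(*isocenters))
```

With a typical CT slice thickness, only one or two slices qualify, so the average is taken over at most a couple of points.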
In one embodiment, the method further includes a step of obtaining the spatial position of the scanning bed on which the person under inspection lies under the initial alignment mark state. The spatial position of the scanning bed can be read at any time through a hardware interface, and each scan image in the scanned image sequence has a corresponding spatial position. Therefore, after the staff finishes pasting the initial alignment marks and manually presses the "position confirmed" button on the machine, the system is triggered to record the current spatial position of the scanning bed (which may also be called the bed value). When step S120 is then performed, not all scan images in the scanned image sequence need to be examined; the scan image slices to be identified are determined from the spatial position corresponding to each scan image and the spatial position of the scanning bed. For example, only the scan images within a preset spatial distance of the scanning bed position may be scanned and identified to determine the position of the isocenter point, which greatly narrows the search range of the algorithm and raises the processing speed. The preset spatial distance range may be a preset range along the bed-in or bed-out direction. In one embodiment, the step of obtaining the spatial position of the scanning bed on which the person under inspection lies under the initial alignment mark state is performed together with step S110.
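The slice-filtering idea described above can be sketched as follows. This is a hypothetical sketch: the function name `slices_near_bed` and the use of a single longitudinal coordinate per slice are assumptions for illustration.

```python
def slices_near_bed(slice_positions, bed_value, max_distance):
    """Restrict mark recognition to slices near the recorded couch position.

    slice_positions: longitudinal position of each scan image in the sequence.
    bed_value: scanning-bed position recorded when "position confirmed"
    was pressed. Only slices within max_distance of the bed value are kept,
    shrinking the search range of the recognition algorithm.
    """
    return [i for i, z in enumerate(slice_positions)
            if abs(z - bed_value) <= max_distance]
```

The recognition flow of step S120 would then run only over the returned slice indices.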
In one embodiment, step S130 specifically inputs the scanned image sequence into the neural network model; the neural network model automatically identifies the contour information of the target object region in the corresponding scan images and calculates the central point of the target object from the contour information.
In one embodiment, before step S130 is performed, a step of training the neural network model using images with annotation information also needs to be performed. An image with annotation information is a patient image carrying information such as contours delineated manually by experts and other annotations. The annotation information includes at least one item of feature information, and the feature information may include segmented contour information.
In one embodiment, the image with annotation information may also be obtained by registering a patient image with a reference atlas sequence carrying annotation information; that is, before the step of training the neural network model with annotated images, a data preprocessing process is also needed. The reference atlas sequence is generally obtained on the basis of the data of normal patients. In the present embodiment, the reference atlas sequence is a target-object atlas sequence obtained after analyzing and synthesizing a large amount of patient data. The annotation information includes at least the segmented contour information (which may also be called segmentation information). The contour information may simultaneously include an accurate contour and a surrounding contour, as shown in Fig. 7. The feature information in the reference atlas sequence therefore includes at least contour information and gray-level information; the more feature information there is, the more accurate the recognition result. Because differences exist between the patient image and the reference atlas sequence, such as morphological details of tissues and organs and the body posture during scanning, the segmentation information and annotations of the reference atlas sequence can only be applied to the patient image once the two are unified into the same coordinate system, namely by registration. In one embodiment, the patient image and the reference atlas sequence may undergo a nonlinear spatial transformation, namely non-rigid registration: the data of the patient image is transformed into the coordinate space of the reference atlas sequence to obtain the transformed, or registered, image with annotation information. In the present embodiment, mutual information is used as the similarity measure of the non-rigid registration, the constraint model of the spatial transformation is based on the Demons model, and the overall registration work is thereby completed. After registration, the annotation information of the reference atlas sequence can be applied directly to the patient sequence.
In one embodiment, before the step of training the neural network model using images with annotation information, feature extraction also needs to be performed on the annotated images, so that the extracted feature data can be fed into the established neural network model to train it. The usable training features fall into two major classes: the gray-level information of the image and the positional information of the boundary contours. For the gray-level information, wavelet decomposition is carried out, and feature information such as the image mean and variance of each frequency band and the cross-correlation coefficients with the decomposition results of the corresponding reference atlas sequence are calculated. For the patient image, boundary segmentation is carried out (for example using the Canny operator or a level-set algorithm); after removing improper contours that are too small in area or not closed, the cross-correlation coefficient between each remaining contour and each contour in the reference atlas sequence is calculated: the larger the correlation coefficient between two contours, the higher their similarity, and the smaller, the lower. Normal contours and tumor contours are annotated differently in the atlas, so the suspected tumor contours in the patient image can be obtained by the above calculation. The extracted feature data is input into the established neural network model, whose number of hidden layers is set to 3 to 5; the weights and thresholds of the relevant nodes are set, and the output signals whether the input features contain a suspected target object. For a boundary contour judged to belong to a suspected target object, its geometric center of gravity is calculated, giving the candidate treatment isocenter point.
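The final step above, taking the geometric center of gravity of a suspected-target contour as the candidate isocenter, can be sketched as follows. A minimal sketch only: the simple vertex average shown here is one common reading of "geometric center of gravity", and the function name `contour_centroid` is an assumption.

```python
def contour_centroid(contour_points):
    """Geometric center of gravity of a suspected-target boundary contour.

    contour_points: list of (x, y) points on the contour judged by the
    network to belong to a suspected target object. The centroid serves
    as the candidate treatment isocenter point.
    """
    n = len(contour_points)
    return (sum(p[0] for p in contour_points) / n,
            sum(p[1] for p in contour_points) / n)
```

For irregular point spacing, an area-weighted polygon centroid would give a more faithful center of gravity; the patent does not specify which is used.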
Fig. 8 is a schematic diagram of the process of establishing the neural network model in a specific embodiment. In the present embodiment, the atlas sequence with segmentation information and the atlas sequence of normal morphology both refer to the reference atlas sequence with annotation information. They are distinguished only because the analysis of signal features (namely the gray-level distribution features of the image) focuses on the gray-level information of the atlas sequence without using the related annotation information, whereas the registration process needs the contour information carried by the segmentation information.
In one embodiment, an imaging device is also provided. The imaging device includes a scanning means and a processor. The scanning means is used to collect the scanned image sequence of the person under inspection under the initial alignment mark state. The processor is connected with the scanning means, obtains the scanned image sequence of the person under inspection under the initial alignment mark state, and performs the steps of the alignment-mark position determining method of any of the foregoing embodiments. After the imaging device determines the position of each alignment mark, the alignment mark positions do not move, so the setup position of the person under inspection in the radiotherapy equipment (such as an RT device) can be determined from the positions of the alignment marks.
With the above imaging device, the position of the alignment mark can be determined by the processing of the processor, so that the alignment marks can be used to realize setup alignment of the person under inspection, ensuring that the central point of the target object of the person under inspection coincides with the isocenter point determined by the alignment marks, namely with the isocenter point of the treatment head, thereby improving treatment efficiency and treatment effect.
In one embodiment, a storage medium on which a computer program is stored is also provided. When executed by a processor, the program can be used to perform the steps of the method described in any of the foregoing embodiments.
In one embodiment, a method for recognizing an alignment mark is also provided. The flowchart of the method is shown in Fig. 9, and the method includes the following steps:
Step S910: obtain a scan image of the person under inspection under the alignment mark state.
Step S920: automatically identify the alignment mark in the scan image.
Specifically, it is judged whether the gray value of each pixel in the scan image exceeds a gray threshold, and pixel regions whose gray values exceed the threshold are automatically recognized as alignment marks.
With the above alignment-mark recognition method, the system can automatically identify the alignment marks in the image by judging the gray value of each pixel, without manual identification by a technician, which improves treatment efficiency and helps reduce the errors caused by manual operation.
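The thresholding of step S920 can be sketched as follows. An illustrative sketch only; the function name `recognize_marks` and the list-of-lists gray image are assumptions, and a real implementation would additionally group the resulting pixels into connected regions.

```python
def recognize_marks(image, gray_threshold):
    """Binarize a scan image: a pixel whose gray value exceeds the gray
    threshold is a candidate alignment-mark pixel (1), others are 0.
    Regions of 1-pixels are then recognized as alignment marks."""
    return [[1 if v > gray_threshold else 0 for v in row] for row in image]
```

Alignment marks are typically radio-opaque and image much brighter than skin, which is why a single gray threshold suffices to separate them.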
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope recorded in this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that a person of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present invention, and these all belong to the protection scope of the present invention. Therefore, the protection scope of this patent shall be determined by the appended claims.

Claims (14)

1. A method for determining the position of an alignment mark, comprising:
obtaining a scanned image sequence of a person under inspection under an initial alignment mark state;
determining an isocenter point under the initial alignment mark state according to the scanned image sequence, comprising:
identifying the initial alignment marks in each scan image slice of the scanned image sequence; and
determining the isocenter point according to the initial alignment marks;
calculating a central point of a target object according to the scanned image sequence using a neural network model; and
determining the position of the alignment mark according to the spatial offset between the isocenter point and the central point of the target object.
2. The method according to claim 1, wherein the step of identifying the initial alignment marks in each scan image slice of the scanned image sequence comprises performing the following steps for each scan image slice of the scanned image sequence:
obtaining a contour of the person under inspection in the scan image, the contour being composed of contour points;
dividing the image region where the contour is located into a plurality of sub-image areas along the contour, each sub-image area including at least part of the contour; and
identifying each sub-image area in turn and determining the initial alignment marks.
3. The method according to claim 2, wherein in the step of dividing the image region where the contour is located into a plurality of sub-image areas along the contour, the division takes contour points as the central points of the sub-image areas, and the distance between the central points of adjacent sub-image areas is greater than the size of the initial alignment mark.
4. The method according to claim 2, wherein the step of identifying each sub-image area in turn and determining the initial alignment marks comprises:
identifying each sub-image area in turn, marking a sub-image area when an alignment mark is recognized, and adding one to the count of alignment marks;
judging whether the count of alignment marks is greater than or equal to an expected value; and
terminating the scanning recognition of the sub-image areas of the scan image when the count of alignment marks is greater than or equal to the expected value.
5. The method according to claim 4, wherein when each sub-image area is identified, it is judged whether the gray value of each pixel in the sub-image area exceeds a gray threshold, and pixel regions whose gray values exceed the gray threshold are identified as alignment marks.
6. The method according to claim 4, wherein after the step of identifying each sub-image area in turn and determining the initial alignment marks, and before the step of determining the isocenter point according to the initial alignment marks, the method further comprises: taking scan images in which the count of alignment marks is greater than or equal to the expected value as target scan images;
and the step of determining the isocenter point according to the initial alignment marks is determining the isocenter point according to the initial alignment marks in the target scan images.
7. The method according to claim 6, wherein the step of determining the isocenter point according to the initial alignment marks in the target scan image comprises:
obtaining the position of each initial alignment mark of the target scan image in its corresponding sub-image area;
calculating the position of each initial alignment mark in the target scan image based on the positional relationship between the sub-image areas and the target scan image; and
calculating the spatial position of the isocenter point according to the position of each initial alignment mark in the target scan image.
8. The method according to claim 2, wherein the step of identifying each sub-image area in turn and determining the initial alignment marks comprises:
identifying each sub-image area in turn and performing marking processing on each sub-image area when an alignment mark is recognized;
calculating the position of each mark zone in the corresponding scan image;
judging whether the scan image contains two mark zones whose distance is less than the size of the initial alignment mark;
if two mark zones whose distance is less than the size of the initial alignment mark exist, merging the two mark zones into one mark zone, taking the average of the positions of the two mark zones as the position of the merged mark zone, and returning to the step of judging whether the merged scan image contains two mark zones whose distance is less than the size of the initial alignment mark; and
if no two mark zones whose distance is less than the size of the initial alignment mark exist, identifying each mark zone as an initial alignment mark.
9. The method according to claim 1, further comprising the step of obtaining the spatial position of the scanning bed on which the person under inspection lies under the initial alignment mark state;
wherein in the step of determining the isocenter point under the initial alignment mark state according to the scanned image sequence, the scan images used for determining the isocenter point corresponding to the initial alignment marks are determined according to the spatial position corresponding to each scan image in the scanned image sequence and its spatial position relationship with the scanning bed.
10. The method according to claim 1, wherein the step of calculating the central point of the target object according to the scanned image sequence using the neural network model comprises:
inputting the scanned image sequence into the neural network model, automatically identifying, by the neural network model, the contour information of the target object region in the corresponding scan images, and calculating the central point of the target object according to the contour information.
11. The method according to claim 10, further comprising the step of training the neural network model using images with annotation information; the annotation information includes at least one item of feature information, and the feature information includes segmented contour information.
12. An imaging device, comprising:
a scanning means for collecting a scanned image sequence of a person under inspection under an initial alignment mark state; and
a processor, connected with the scanning means, for obtaining the scanned image sequence of the person under inspection under the initial alignment mark state;
wherein the processor is further configured to determine an isocenter point under the initial alignment mark state according to the scanned image sequence, comprising: identifying the initial alignment marks in each scan image slice of the scanned image sequence; and determining the isocenter point according to the initial alignment marks;
and the processor is further configured to calculate a central point of a target object according to the scanned image sequence using a neural network model, and to determine the position of the alignment mark according to the spatial offset between the isocenter point and the central point of the target object.
13. A storage medium on which a computer program is stored, wherein when executed by a processor the program can be used to perform the steps of the method according to any one of claims 1 to 11.
14. A method for recognizing an alignment mark, comprising:
obtaining a scan image of a person under inspection under an alignment mark state; and
automatically identifying the alignment mark in the scan image, comprising: judging whether the gray value of each pixel in the scan image exceeds a gray threshold, and identifying pixel regions whose gray values exceed the gray threshold as alignment marks.
CN201710859487.2A 2017-09-21 2017-09-21 Alignment mark recognition method, alignment mark position determination method, image forming apparatus, and storage medium Active CN107596578B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710859487.2A CN107596578B (en) 2017-09-21 2017-09-21 Alignment mark recognition method, alignment mark position determination method, image forming apparatus, and storage medium


Publications (2)

Publication Number Publication Date
CN107596578A true CN107596578A (en) 2018-01-19
CN107596578B CN107596578B (en) 2020-07-14

Family

ID=61061910

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710859487.2A Active CN107596578B (en) 2017-09-21 2017-09-21 Alignment mark recognition method, alignment mark position determination method, image forming apparatus, and storage medium

Country Status (1)

Country Link
CN (1) CN107596578B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101071473A (en) * 2006-03-07 2007-11-14 株式会社东芝 Feature point detector and its method
CN101916443A (en) * 2010-08-19 2010-12-15 中国科学院深圳先进技术研究院 Processing method and system of CT image
CN103829965A (en) * 2012-11-27 2014-06-04 Ge医疗系统环球技术有限公司 Method and device for guiding CT scan through tags
CN104414662A (en) * 2013-09-04 2015-03-18 江苏瑞尔医疗科技有限公司 Position calibration and error compensation device of imaging equipment and compensation method of position calibration and error compensation device
CN105455830A (en) * 2014-09-29 2016-04-06 西门子股份公司 Method for selecting a recording area and system for selecting a recording area
CN105678272A (en) * 2016-03-25 2016-06-15 符锌砂 Complex environment target detection method based on image processing


Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108852400A (en) * 2018-07-02 2018-11-23 沈阳东软医疗系统有限公司 A kind of method and device for realizing therapeutic community's location verification
CN108852400B (en) * 2018-07-02 2022-02-18 东软医疗系统股份有限公司 Method and device for realizing position verification of treatment center
CN109145902B (en) * 2018-08-21 2021-09-03 武汉大学 Method for recognizing and positioning geometric identification by using generalized characteristics
CN109145902A (en) * 2018-08-21 2019-01-04 武汉大学 A method of geometry is identified and positioned using extensive feature
CN111515915A (en) * 2019-02-01 2020-08-11 株式会社迪思科 Alignment method
CN111515915B (en) * 2019-02-01 2023-09-26 株式会社迪思科 Alignment method
CN109949260A (en) * 2019-04-02 2019-06-28 晓智科技(成都)有限公司 A kind of x optical detector height adjustment progress automatic Image Stitching method
CN110689521A (en) * 2019-08-15 2020-01-14 福建自贸试验区厦门片区Manteia数据科技有限公司 Automatic identification method and system for human body part to which medical image belongs
CN112884820A (en) * 2019-11-29 2021-06-01 杭州三坛医疗科技有限公司 Method, device and equipment for training initial image registration and neural network
CN111062390A (en) * 2019-12-18 2020-04-24 北京推想科技有限公司 Region-of-interest labeling method, device, equipment and storage medium
CN111419399A (en) * 2020-03-17 2020-07-17 京东方科技集团股份有限公司 Positioning tracking piece, positioning ball identification method, storage medium and electronic device
CN113546333A (en) * 2020-07-16 2021-10-26 上海联影医疗科技股份有限公司 Isocenter calibration system and method
US11786759B2 (en) 2020-07-16 2023-10-17 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for isocenter calibration
CN113438960B (en) * 2021-04-02 2023-01-31 复旦大学附属肿瘤医院 Target disposal method and system
CN113438960A (en) * 2021-04-02 2021-09-24 复旦大学附属肿瘤医院 Target disposal method and system
CN113520426A (en) * 2021-06-28 2021-10-22 上海联影医疗科技股份有限公司 Coaxiality measuring method, medical equipment rack adjusting method, equipment and medium
CN116756045A (en) * 2023-08-14 2023-09-15 海马云(天津)信息技术有限公司 Application testing method and device, computer equipment and storage medium
CN116756045B (en) * 2023-08-14 2023-10-31 海马云(天津)信息技术有限公司 Application testing method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN107596578B (en) 2020-07-14

Similar Documents

Publication Publication Date Title
CN107596578A (en) The identification and location determining method of alignment mark, imaging device and storage medium
CN110956635B (en) Lung segment segmentation method, device, equipment and storage medium
US9684961B2 (en) Scan region determining apparatus
CN111627521B (en) Enhanced utility in radiotherapy
CN112022191B (en) Positioning method and system
CN105678746A (en) Positioning method and apparatus for the liver region in medical images
US20120070052A1 (en) Methods for segmenting images and detecting specific structures
CN110689521A (en) Automatic identification method and system for human body part to which medical image belongs
US9763636B2 (en) Method and system for spine position detection
JP6095112B2 (en) Radiation therapy system
CN113920114B (en) Image processing method, image processing apparatus, computer device, storage medium, and program product
US20240212836A1 (en) Medical devices, methods and systems for monitoring the medical devices
CN113077433B (en) Deep learning-based tumor target area cloud detection device, system, method and medium
US20230177681A1 (en) Method for determining an ablation region based on deep learning
US20200242802A1 (en) Marker Element and Application Method with ECG
KR101750173B1 (en) System and method for the automatic calculation of the effective dose
KR102409284B1 (en) Automatic Evaluation Apparatus for Tracking Tumors and Radiation Therapy System using the same
EP3777686B1 (en) Medical image processing device, medical image processing method, and program
KR20180115122A (en) Image processing apparatus and method for generating virtual x-ray image
CN112085698A (en) Method and device for automatically analyzing left and right breast ultrasonic images
CN114067994A (en) Target part orientation marking method and system
CN115880469B (en) Registration method of surface point cloud data and three-dimensional image
CN110215621B (en) Outer contour extraction method and device, treatment system and computer storage medium
Jain et al. A novel strategy for automatic localization of cephalometric landmarks
US20230398376A1 (en) Methods and systems for radiation therapy guidance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: No. 2258, Chengbei Road, Jiading District, Shanghai 201807

Patentee after: Shanghai Lianying Medical Technology Co., Ltd

Address before: No. 2258, Chengbei Road, Jiading District, Shanghai 201807

Patentee before: SHANGHAI UNITED IMAGING HEALTHCARE Co.,Ltd.
