CN107644411A - Ultrasonic wide-scene imaging method and device

Publication number: CN107644411A
Application number: CN201710850265.4A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 韦华昌
Applicant / Assignee: WUHAN ZONCARE BIO-MEDICAL ELECTRONICS Co Ltd
Legal status: Pending
Classification: Image Processing (AREA)
Abstract

An embodiment of the present invention provides an ultrasonic wide-scene imaging method and device, belonging to the field of image processing. The method includes: extracting respective image feature points from two acquired original images to be stitched; obtaining, based on the image feature points of each original image, multiple matching point pairs between the two original images; selecting a target matching point pair from the multiple matching point pairs; determining an overlapping region of the two original images based on the target matching point pair; establishing a suture line based on the overlapping region; and stitching the two original images into an ultrasonic wide-scene image along the suture line. The method yields an accurate matching point pair, so that the acquired ultrasonic wide-scene image is free of ghosting, which improves the clarity of the ultrasonic wide-scene image.

Description

Ultrasonic wide-scene imaging method and device
Technical field
The present invention relates to the field of image processing, and in particular to an ultrasonic wide-scene imaging method and device.
Background technology
Medical ultrasound imaging is harmless and economical, which makes it a preferred means of disease diagnosis in wide clinical use. However, ultrasonic imaging can only generate images with a small field of view, so a doctor cannot see a complete organ in a single ultrasound image. Image stitching technology seamlessly joins the scanned images to provide a broader field of view, which facilitates the doctor's observation and diagnosis. In the wide-scene imaging mode, the probe is moved forward along the examined region, and the image obtained by each scan is successively stitched onto the previously stitched image, finally yielding a panoramic image. Compared with a traditional ultrasound image, such an image has a broader field of view and can display tissue and organ information over a larger region of interest.
Wide-scene imaging mainly comprises two steps: registration and stitching. Registration exploits the strong correlation between consecutive frames: the same target region is searched for in a reference image, and the transformation coefficients between the two frames are calculated from the motion trajectory of that region. Registration is a critical step of wide-scene imaging. Registration techniques to date are mainly based on mutual information or feature point matching, and the two images participating in registration must be sufficiently similar to yield high-precision registration coefficients.
Existing feature point matching methods derive a slope from all matching point pairs. When the matching points carry large errors, the computed slope can deviate considerably from the slope actually required, so the finally synthesized image has a high probability of ghosting, producing a blurred stitched image.
Summary of the invention
In view of this, the purpose of the embodiments of the present invention is to provide an ultrasonic wide-scene imaging method and device to solve the above problems.
In a first aspect, an embodiment of the present invention provides an ultrasonic wide-scene imaging method. The method includes: extracting respective image feature points from two acquired original images to be stitched; obtaining, based on the image feature points of each original image, multiple matching point pairs between the two original images; selecting a target matching point pair from the multiple matching point pairs; determining an overlapping region of the two original images based on the target matching point pair; establishing a suture line based on the overlapping region; and stitching the two original images into an ultrasonic wide-scene image along the suture line.
Further, extracting the respective image feature points from the two acquired original images to be stitched includes: extracting the respective image feature points from the two acquired original images using the Harris algorithm.
Further, extracting the respective image feature points using the Harris algorithm includes: for each original image, obtaining the first-order differential derivative of each pixel of the original image in the X direction and the first-order differential derivative in the Y direction; convolving the first-order differential derivatives in the X and Y directions with a filter function to construct a Harris correlation matrix; and obtaining, from the eigenvalues of the Harris correlation matrix, the image feature points that satisfy a preset condition.
Further, obtaining the multiple matching point pairs between the two original images based on the image feature points of each original image includes: obtaining the multiple matching point pairs using the RANSAC algorithm.
Further, before the step of extracting the respective image feature points from the two acquired original images to be stitched, the method also includes: obtaining two ultrasound images collected by the ultrasonic front end; and, for each ultrasound image, adding a second preset gray value to the pixels of the acquired ultrasound image whose gray values are below a first preset gray value, to obtain an original image satisfying a preset condition.
In a second aspect, an embodiment of the present invention provides an ultrasonic wide-scene imaging device. The device includes: a feature acquisition module, for extracting respective image feature points from two acquired original images to be stitched; a matching point pair acquisition module, for obtaining, based on the image feature points of each original image, multiple matching point pairs between the two original images; a target matching point pair acquisition module, for selecting a target matching point pair from the multiple matching point pairs; an overlapping region acquisition module, for determining an overlapping region of the two original images based on the target matching point pair; a suture line establishing module, for establishing a suture line based on the overlapping region; and a stitching module, for stitching the two original images into an ultrasonic wide-scene image along the suture line.
Further, the feature acquisition module is specifically configured to extract the respective image feature points from the two acquired original images to be stitched using the Harris algorithm.
Further, the feature acquisition module includes: a derivation unit, for obtaining, for each original image, the first-order differential derivative of each pixel of the original image in the X direction and the first-order differential derivative in the Y direction; a matrix acquisition unit, for convolving the first-order differential derivatives in the X and Y directions with a filter function to construct a Harris correlation matrix; and a feature acquisition unit, for obtaining, from the eigenvalues of the Harris correlation matrix, the image feature points satisfying a preset condition.
Further, the matching point pair acquisition module is specifically configured to obtain, based on the image feature points of each original image, the multiple matching point pairs between the two original images using the RANSAC algorithm.
The beneficial effects of the embodiments of the present invention are as follows:
The embodiments of the present invention provide an ultrasonic wide-scene imaging method and device. Respective image feature points are first extracted from two acquired original images to be stitched; then, based on the image feature points of each original image, multiple matching point pairs between the two original images are obtained; a target matching point pair is selected from the multiple matching point pairs; an overlapping region of the two original images is determined based on the target matching point pair; a suture line is established based on the overlapping region; and the two original images are stitched into an ultrasonic wide-scene image along the suture line. This method yields an accurate matching point pair, so that the acquired ultrasonic wide-scene image is free of ghosting, improving the clarity of the ultrasonic wide-scene image.
Other features and advantages of the present invention will be set forth in the subsequent description, and will in part become apparent from the description or be understood by implementing the embodiments of the present invention. The objectives and other advantages of the present invention can be realized and obtained by the structures particularly pointed out in the written description, the claims and the accompanying drawings.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and are therefore not to be regarded as limiting its scope. Those of ordinary skill in the art can derive other related drawings from these drawings without creative effort.
Fig. 1 is a flow chart of an ultrasonic wide-scene imaging method provided by an embodiment of the present invention;
Fig. 2 is a flow chart of step S110 of an ultrasonic wide-scene imaging method provided by an embodiment of the present invention;
Fig. 3 is an application schematic diagram of an ultrasonic wide-scene imaging method provided by an embodiment of the present invention;
Fig. 4 is a structural block diagram of an ultrasonic wide-scene imaging device provided by an embodiment of the present invention.
Embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments of the present invention, as generally described and illustrated in the drawings herein, can be arranged and designed in a variety of configurations. Therefore, the following detailed description of the embodiments of the present invention provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be noted that similar labels and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings. Meanwhile, in the description of the present invention, the terms "first", "second" and the like are used only to distinguish the descriptions and are not to be understood as indicating or implying relative importance.
The ultrasonic wide-scene imaging method provided by the embodiments of the present invention is applied to a terminal device. The terminal device includes the ultrasonic wide-scene imaging device of the embodiments of the present invention, which is installed on the terminal device in the form of application software or a program.
Referring to Fig. 1, Fig. 1 is a flow chart of an ultrasonic wide-scene imaging method provided by an embodiment of the present invention. The method specifically comprises the following steps:
Step S110: extracting respective image feature points from the two acquired original images to be stitched.
Two ultrasound images collected by the ultrasonic front end are obtained first. For each ultrasound image, a second preset gray value is added to the pixels whose gray values are below a first preset gray value, to obtain an original image satisfying a preset condition.
Because the ultrasound images collected by the ultrasonic front end are relatively dark, the brightness of the two ultrasound images needs to be increased in order to better extract their image feature points. As one approach, for each ultrasound image, the second preset gray value is added to each pixel whose gray value is below the first preset gray value, to obtain an original image satisfying the preset condition. For example, 120 is added to every pixel whose gray value is below 135, and the result is used as the pixel value of the original image; thus, if a pixel in one of the two collected ultrasound images has a gray value of 90, its gray value is changed to 90 + 120 = 210. The brightness of the ultrasound images is enhanced in this way, and the two brightened ultrasound images are then used as the original images from which the respective image feature points are extracted.
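The brightness-lifting preprocessing described above can be sketched in a few lines of Python. The function name, the 2-D-list image representation and the clipping to 255 are illustrative assumptions not taken from the patent; the thresholds 135 and 120 are the example values from the text.

```python
def brighten(image, low_thresh=135, boost=120):
    """Add `boost` to every pixel darker than `low_thresh`, as in the
    preprocessing step described above.  `image` is a 2-D list of gray
    values in [0, 255]; results are clipped to 255 (an assumption, since
    the patent does not say how overflow is handled)."""
    return [[min(255, p + boost) if p < low_thresh else p for p in row]
            for row in image]

frame = [[90, 140], [30, 200]]
print(brighten(frame))  # [[210, 140], [150, 200]]
```

The pixel with gray value 90 becomes 210, matching the worked example in the text; pixels at or above the first preset gray value are left unchanged.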
In the embodiment of the present invention, the respective image feature points are extracted from the two acquired original images to be stitched using the Harris algorithm.
Referring to Fig. 2, step S110 includes:
Step S111: for each original image, obtaining the first-order differential derivative of each pixel of the original image in the X direction and the first-order differential derivative in the Y direction.
The Harris algorithm is a first-order derivative matrix detection method based on image grayscale. It is unaffected by image translation, rotation and illumination changes, and can detect feature points stably. Its idea is as follows: for each original image with image function f(x, y), for an image pixel (x, y), the first-order differential derivatives Ix and Iy of the pixel in the x and y directions are taken respectively.
Step S112: convolving the first-order differential derivatives in the X and Y directions with a filter function to construct a Harris correlation matrix.
The Harris correlation matrix is:

M = w * [ Ix²  Ixy ;  Ixy  Iy² ]

where w is the filter function, a Gaussian template with mean 0 and variance σ, used to perform Gaussian smoothing and eliminate isolated noise points; * denotes convolution; and Ixy = Ix·Iy.
The pixels in an image fall into three kinds: ordinary points, edge points, and feature points (also called corner points, hereinafter referred to as feature points). The Harris correlation matrix M has two eigenvalues, and since the matrix is positive semi-definite, both eigenvalues are greater than or equal to 0. The three kinds of points can be distinguished, and the feature points found, by the magnitudes of the two eigenvalues: if both eigenvalues are small, the point is an ordinary point; if one eigenvalue is large and the other very small, the point is an edge point; and if both eigenvalues are large, the point is a feature point.
Step S113: obtaining, from the eigenvalues of the Harris correlation matrix, the image feature points satisfying the preset condition.
A Harris operator is constructed to obtain a response value R, and a threshold T is set. When the R value of a point is an extremum within a fixed local region and exceeds the set threshold T, the point is a feature point.
The Harris operator is: R = det(M) − k·tr(M)², where det(M) is the determinant of the matrix, tr(M) is its trace, and k is an empirical constant, generally between 0.04 and 0.06; the smaller k is, the larger the R values are and the more features are detected.
The threshold T may, for example, take the value 5000: when the R value of a point is an extremum within a fixed local region and exceeds the threshold 5000, the point is an image feature point. By this method, the image feature points of the two original images can be obtained.
Step S120: obtaining, based on the image feature points of each original image, multiple matching point pairs between the two original images.
In this embodiment, the multiple matching point pairs between the two original images are obtained using the RANSAC algorithm. That is, the feature points in the two original images are matched: it is judged which features in the two original images are the same feature, the same features are matched, and the feature point pair with the greatest similarity is found as the best matching point pair for that feature. The basic approach is to determine a feature descriptor from the gray-level information of the feature points, or from the features to be matched.
RANSAC is the abbreviation of RANdom SAmple Consensus. Literally, it means randomly sampling from the matched samples and finding the consistent sample points. The RANSAC algorithm computes the parameters of a mathematical model from a set of sample data containing abnormal data, so as to obtain the valid sample data. Its core idea is to randomly take 4 of the matched feature points and, through calculation and continuous iteration, search for the optimal parameter model, i.e. the model under which the largest number of feature points match.
The RANSAC algorithm needs to find an optimal homography matrix H of size 3 × 3, such that the number of feature points satisfying the matrix is maximal. Since the matrix is usually normalized by setting h33 = 1, the homography matrix H has only 8 unknown parameters, so at least 8 linear equations are needed to solve for it. In terms of point position information, each pair of matched feature points yields two linear equations, so at least 4 pairs of matched feature points are required to solve for the homography matrix H.
The homography relation is:

s·[x', y', 1]ᵀ = H·[x, y, 1]ᵀ, with H = [ h11 h12 h13 ; h21 h22 h23 ; h31 h32 1 ]

where (x, y) is the position of an image feature point in one of the original images, (x', y') is the position of the corresponding image feature point in the other original image, and s is a scale parameter.
The RANSAC algorithm randomly extracts 4 samples from the matched data set, ensuring that the four samples are not collinear, and calculates their homography matrix. It then tests all the data with this model, counting the number of data points that fit the model and computing the projection error (i.e. the cost function). If the model is the optimal model, the corresponding cost function is minimal.
Specifically: first, 4 sample data (which must not be collinear) are randomly extracted from the feature matching data set, and the transformation matrix H is calculated and recorded as model M. The projection error of each datum in the data set with respect to model M is then calculated; if the projection error of a datum is below the set threshold, the datum is added to the inlier set I. If the number of elements in the current inlier set I exceeds that of the optimal inlier set I_best, I_best is updated to I, and the iteration count k is updated at the same time, where

k = log(1 − p) / log(1 − wᵐ)

in which p is the confidence, generally taken as 0.995; w is the inlier ratio; and m is the minimum number of samples required to compute the model, namely 4. If the number of iterations performed exceeds k (calculated from I_best), the algorithm exits; otherwise the iteration count is increased by 1 and the above steps are repeated.
It should be noted that the iteration count is increased by 1 only while it does not exceed the maximum iteration count k; once it exceeds k, the calculation stops and the iteration ends. Note also that the maximum iteration count k is determined by the optimal inlier set I_best: whenever the optimal inlier set changes, the maximum iteration count k changes accordingly.
By the above method, multiple matching point pairs are obtained.
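The iterative scheme above can be sketched as follows, under simplifying assumptions: a pure 2-D translation replaces the 3×3 homography, so a minimal sample is a single pair (m = 1) rather than the 4 non-collinear pairs the patent uses, and all names, thresholds and test data are illustrative. The iteration-count update k = log(1 − p) / log(1 − w^m) follows the text.

```python
import math
import random

def ransac_translation(pairs, thresh=2.0, p=0.995, m=1):
    """Simplified RANSAC sketch: estimate a 2-D translation from
    matching point pairs ((x, y), (x2, y2)) and return the inliers.
    A single pair is a minimal sample here (m = 1); the patent's full
    method instead fits a homography from 4 non-collinear pairs."""
    best_inliers, k, it = [], 1, 0
    while it < k or not best_inliers:
        (x, y), (x2, y2) = random.choice(pairs)      # minimal random sample
        dx, dy = x2 - x, y2 - y                      # candidate model
        inliers = [pr for pr in pairs
                   if math.hypot(pr[1][0] - pr[0][0] - dx,
                                 pr[1][1] - pr[0][1] - dy) < thresh]
        if len(inliers) > len(best_inliers):         # better model found
            best_inliers = inliers
            w = len(inliers) / len(pairs)            # inlier ratio
            if 0 < w < 1:                            # update max iterations
                k = math.log(1 - p) / math.log(1 - w ** m)
        it += 1
        if it > 1000:                                # safety cap
            break
    return best_inliers

random.seed(0)
pairs = [((i, i), (i + 10, i + 5)) for i in range(8)] + [((0, 0), (50, 50))]
print(len(ransac_translation(pairs)))  # 8 (the one gross outlier is rejected)
```

Note how k shrinks as soon as a model with many inliers is found, ending the loop early — the behaviour the text describes for I_best and the maximum iteration count.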
Step S130: selecting the target matching point pair from the multiple matching point pairs.
Because the ultrasonic probe moves slowly, the spacing between the two original images is small, and the distance between the two points of each matching point pair should not exceed 150 pixels, so that the two original images can be stitched successfully. In order that the stitched result be free of ghosting, the erroneous matches must be rejected again, and a suitable matching point pair selected from the remainder as the target matching point pair.
For example, after the multiple matching point pairs are obtained by the above steps, the distance between the two points of each pair is calculated and the target matching point pair is chosen accordingly. Suppose three matching point pairs A(a1, a2), B(b1, b2) and C(c1, c2) are obtained, whose distances in the x direction are X1, X2 and X3 respectively, with X1 > X2 > X3; then the middle pair B is taken as the target matching point pair. If there is an even number of matching point pairs, say four pairs A(a1, a2), B(b1, b2), C(c1, c2) and D(d1, d2) with x-direction distances X1 > X2 > X3 > X4, the average of X2 and X3 is taken, and the pair whose distance is closer to this average, such as B, is taken as the target matching point pair.
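One possible reading of this selection rule, sketched in Python: sort the pairs by x-direction distance and take the median pair, with the even-count case resolved toward the mean of the two middle distances. The function name, data layout and tie-breaking are assumptions, since the text only gives worked examples.

```python
def target_pair(pairs):
    """Pick the target matching point pair: with an odd number of pairs,
    take the pair with the median x-direction distance; with an even
    number, take whichever of the two middle pairs is closest to the
    mean of their two distances (ties go to the smaller distance)."""
    dist = lambda pr: abs(pr[1][0] - pr[0][0])   # x-direction distance
    s = sorted(pairs, key=dist)
    n = len(s)
    if n % 2 == 1:
        return s[n // 2]                          # median pair
    mid_avg = (dist(s[n // 2 - 1]) + dist(s[n // 2])) / 2
    return min(s[n // 2 - 1:n // 2 + 1], key=lambda pr: abs(dist(pr) - mid_avg))

# Distances 12 > 10 > 7, so the middle pair B is chosen, as in the text.
A, B, C = ((0, 0), (12, 0)), ((0, 0), (10, 0)), ((0, 0), (7, 0))
print(target_pair([A, B, C]))  # ((0, 0), (10, 0))
```

Taking a median-like pair rather than an extreme one is what rejects residual mismatches whose distances are atypically large or small.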
Step S140: determining the overlapping region of the two original images based on the target matching point pair.
After the target matching point pair is obtained, the two original images are stitched: taking the target matching point pair as the anchor, the vertical offset of one of the two original images is adjusted. As shown in Fig. 3, the black dot is the target matching point pair, and the overlapping region is the intersection region of the two original images, whose size can thus be obtained.
Step S150: establishing the suture line based on the overlapping region.
The overlapping part of the two original images contains shadows, called ghosts, which blur the image because the same features are superimposed. To eliminate the ghosts, the overlapping region of the two registered original images is divided into two parts, each part corresponding to the respective portion of one original image, and the registered overlapping region is split by an ideal suture line having the following properties:
(1) in color intensity, the difference between the color values of the pixels on the suture line in the two original images is minimal;
(2) the structures of the pixels on the suture line in the two original images are most similar.
Since a real image rarely yields a suture line satisfying both requirements simultaneously, an optimal suture line best satisfying the two conditions must be found. Through analysis of and experiments on images, the following optimal suture line criterion is obtained:
E(x, y) = Ecolor(x, y)² + Egeometry(x, y)
where Ecolor denotes the difference of the color values of the overlapping pixels in the two original images, i.e. the color difference of the target matching point pair, and Egeometry denotes the structural difference of the point between the two original images. Egeometry is realized by computing gradients with the Sobel operator; the gradients in the x and y directions are computed with the templates:

Sx = [ −1 0 1 ; −2 0 2 ; −1 0 1 ]  and  Sy = [ −1 −2 −1 ; 0 0 0 ; 1 2 1 ]

With the two original images denoted f1 and f2, Egeometry is the product of their gradient differences in the x and y directions.
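The criterion E(x, y) = Ecolor(x, y)² + Egeometry(x, y) could be evaluated per pixel as sketched below. The helper names are assumptions, and the 3×3 windowed sum is written as a plain correlation for brevity.

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def conv3(img, kern, r, c):
    """3x3 windowed sum of `kern` against `img`, centred on pixel (r, c);
    (r, c) must be at least one pixel away from the image border."""
    return sum(img[r + i - 1][c + j - 1] * kern[i][j]
               for i in range(3) for j in range(3))

def seam_criterion(f1, f2, r, c):
    """E = Ecolor^2 + Egeometry at (r, c): Ecolor is the gray-value
    difference of the two overlapping images, and Egeometry is the
    product of their Sobel-gradient differences in x and y, per the
    criterion in the text."""
    e_color = f1[r][c] - f2[r][c]
    dgx = conv3(f1, SOBEL_X, r, c) - conv3(f2, SOBEL_X, r, c)
    dgy = conv3(f1, SOBEL_Y, r, c) - conv3(f2, SOBEL_Y, r, c)
    return e_color ** 2 + dgx * dgy

f1 = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
print(seam_criterion(f1, f1, 1, 1))  # 0 -- identical images, ideal seam point
```

A low E at a pixel means the two images agree there in both color and structure, so the seam should prefer to pass through it.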
According to this criterion, a difference operation is applied to the overlapping region of the two original images to generate an error image. A suture line is then established on this error image with the idea of dynamic programming, starting from each pixel of the first row of the overlapping region as a starting point, and an optimal suture line is finally selected from these suture lines. The specific steps are as follows:
(1) Initialization: each pixel of the first row corresponds to one suture line, whose intensity value is initialized to the criterion value of that point; the current point of the suture line is that point in the first row.
(2) Extension: each suture line whose strength has been computed for a row is extended downward, row by row, until the last row. The extension compares the criterion values of the 3 pixels in the next row adjacent to the current point of the suture line; the one of these 3 pixels with the minimal intensity value determines the extension direction of the suture line, the intensity value of the suture line is increased by that minimal intensity value, and the current point of the suture line is updated to the position of the adjacent pixel in the next row where the minimal intensity value was found.
(3) Selection of the optimal suture line: from all the suture lines, the one with the minimal intensity value is selected as the optimal suture line.
Step S160: stitching the two original images into an ultrasonic wide-scene image along the suture line.
According to the optimal suture line obtained in step S150, the values of the images on the left and right sides of the suture line are taken to synthesize one ultrasonic wide-scene image.
Referring to Fig. 4, Fig. 4 is a structural block diagram of an ultrasonic wide-scene imaging device 200 provided by an embodiment of the present invention. The device includes:
a feature acquisition module 210, for extracting respective image feature points from the two acquired original images to be stitched;
a matching point pair acquisition module 220, for obtaining, based on the image feature points of each original image, multiple matching point pairs between the two original images;
a target matching point pair acquisition module 230, for selecting a target matching point pair from the multiple matching point pairs;
an overlapping region acquisition module 240, for determining the overlapping region of the two original images based on the target matching point pair;
a suture line establishing module 250, for establishing a suture line based on the overlapping region;
a stitching module 260, for stitching the two original images into an ultrasonic wide-scene image along the suture line.
The feature acquisition module 210 is specifically configured to extract the respective image feature points from the two acquired original images to be stitched using the Harris algorithm.
The feature acquisition module 210 includes:
a derivation unit, for obtaining, for each original image, the first-order differential derivative of each pixel of the original image in the X direction and the first-order differential derivative in the Y direction;
a matrix acquisition unit, for convolving the first-order differential derivatives in the X and Y directions with a filter function to construct a Harris correlation matrix;
a feature acquisition unit, for obtaining, from the eigenvalues of the Harris correlation matrix, the image feature points satisfying a preset condition.
The matching point pair acquisition module 220 is specifically configured to obtain, based on the image feature points of each original image, the multiple matching point pairs between the two original images using the RANSAC algorithm.
The device also includes:
an acquisition module, for obtaining two ultrasound images collected by the ultrasonic front end;
an original image acquisition module, for adding, for each ultrasound image, a second preset gray value to the pixels of the acquired ultrasound image whose gray values are below a first preset gray value, to obtain an original image satisfying a preset condition.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the device described above may refer to the corresponding process in the preceding method and is not repeated here.
In summary, the embodiments of the present invention provide an ultrasonic wide-scene imaging method and device. Respective image feature points are first extracted from the two acquired original images to be stitched; then, based on the image feature points of each original image, multiple matching point pairs between the two original images are obtained; a target matching point pair is selected from the multiple matching point pairs; the overlapping region of the two original images is determined based on the target matching point pair; a suture line is established based on the overlapping region; and the two original images are stitched into an ultrasonic wide-scene image along the suture line. This method yields an accurate matching point pair, so that the acquired ultrasonic wide-scene image is free of ghosting, improving the clarity of the ultrasonic wide-scene image.
In several embodiments provided herein, it should be understood that disclosed apparatus and method, can also lead to Other modes are crossed to realize.Device embodiment described above is only schematical, for example, the flow chart in accompanying drawing and Block diagram shows the system in the cards of the device of multiple embodiments according to the present invention, method and computer program product Framework, function and operation.At this point, each square frame in flow chart or block diagram can represent a module, program segment or generation A part for code, a part for the module, program segment or code include one or more and are used to realize defined logic function Executable instruction.It should also be noted that at some as in the implementation replaced, the function of being marked in square frame can also To occur different from the order marked in accompanying drawing.For example, two continuous square frames can essentially perform substantially in parallel, They can also be performed in the opposite order sometimes, and this is depending on involved function.It is also noted that block diagram and/or stream The combination of each square frame and block diagram in journey figure and/or the square frame in flow chart, function or dynamic as defined in performing can be used The special hardware based system made is realized, or can be realized with the combination of specialized hardware and computer instruction.
In addition, the functional modules in the embodiments of the present invention may be integrated to form an independent part, each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing describes only the preferred embodiments of the present invention and is not intended to limit the invention; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent substitution, improvement, or the like made within the spirit and principles of the present invention shall be included in the scope of protection. It should be noted that similar labels and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings.
The foregoing is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that would readily occur to those familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the scope of the claims.
It should be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relation or order between these entities or operations. Moreover, the terms "comprising", "including", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements, but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.

Claims (10)

  1. An ultrasonic wide-scene imaging method, characterized in that the method comprises:
    extracting respective image characteristic points from two acquired original images to be spliced;
    based on the image characteristic points of each original image, obtaining multiple match points corresponding between the two original images;
    obtaining a target matching point pair from the multiple match points;
    obtaining an overlapping region of the two original images based on the target matching point pair;
    establishing a suture line based on the overlapping region;
    splicing the two original images along the suture line into an ultrasonic wide-scene image.
  2. The method according to claim 1, characterized in that extracting respective image characteristic points from the two acquired original images to be spliced comprises:
    extracting the respective image characteristic points from the two acquired original images to be spliced using the Harris algorithm.
  3. The method according to claim 2, characterized in that extracting the respective image characteristic points from the two acquired original images to be spliced using the Harris algorithm comprises:
    for each original image, obtaining the first-order differential derivative in the X direction and the first-order differential derivative in the Y direction of each pixel of the original image;
    convolving the first-order differential derivatives in the X direction and the Y direction with a filter function to construct a Harris correlation matrix;
    obtaining, from the eigenvalues of the Harris correlation matrix, the image characteristic points that meet a preset condition.
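The construction in claim 3 can be sketched in numpy: first-order differential derivatives in the X and Y directions, convolution of their products with a Gaussian filter function to form the Harris correlation matrix at each pixel, and a corner response derived from that matrix's eigenvalues (via its determinant and trace rather than an explicit eigendecomposition). The kernel width, the constant `k`, and all names below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def harris_response(img, k=0.04, sigma=1.0):
    """Harris corner response from first-order derivatives and a filter function."""
    img = img.astype(np.float64)
    # First-order differential derivatives in Y (rows) and X (columns).
    Iy, Ix = np.gradient(img)
    # 1-D Gaussian filter function, truncated at 3 sigma, normalized.
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    def smooth(a):
        # Separable convolution: rows, then columns.
        a = np.apply_along_axis(lambda v: np.convolve(v, g, mode="same"), 1, a)
        return np.apply_along_axis(lambda v: np.convolve(v, g, mode="same"), 0, a)
    # Entries of the Harris correlation matrix M = [[A, C], [C, B]] per pixel.
    A, B, C = smooth(Ix * Ix), smooth(Iy * Iy), smooth(Ix * Iy)
    # Eigenvalue-based response: det(M) - k * trace(M)^2.
    return A * B - C * C - k * (A + B) ** 2
```

Characteristic points would then be the pixels whose response exceeds a preset condition, e.g. a threshold combined with local-maximum selection.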
  4. The method according to claim 1, characterized in that, based on the image characteristic points of each original image, obtaining the multiple match points corresponding between the two original images comprises:
    based on the image characteristic points of each original image, obtaining the multiple match points corresponding between the two original images using the RANSAC algorithm.
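A minimal RANSAC sketch for the matching step of claim 4, assuming a pure-translation motion model (the patent does not fix the model): each iteration hypothesizes a shift from one randomly chosen putative match and keeps the hypothesis with the most inliers. The function name, iteration count, and tolerance are illustrative.

```python
import numpy as np

def ransac_translation(pts_a, pts_b, n_iter=200, tol=2.0, seed=0):
    """Filter putative matches with RANSAC under a translation model.

    pts_a, pts_b: (N, 2) arrays of putatively matched coordinates.
    Returns the inlier mask and the estimated (dx, dy) shift.
    """
    rng = np.random.default_rng(seed)
    d = pts_a - pts_b                      # per-match displacement vectors
    best_mask, best_shift = None, None
    for _ in range(n_iter):
        # Minimal sample: one match fully determines a candidate translation.
        cand = d[rng.integers(len(d))]
        mask = np.linalg.norm(d - cand, axis=1) < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask = mask
            best_shift = d[mask].mean(axis=0)  # refit on the inliers
    return best_mask, best_shift
```

The surviving inliers are the "match points of corresponding matching"; gross mismatches between the two original images are rejected, which is what prevents ghosting at the suture.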
  5. The method according to claim 1, characterized in that, before the step of extracting respective image characteristic points from the two acquired original images to be spliced, the method further comprises:
    obtaining two ultrasonic images collected by an ultrasonic front end;
    for each ultrasonic image, adding a second preset gray value to the pixels in the acquired ultrasonic image that are below a first preset gray value, to obtain an original image that meets a preset condition.
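The preprocessing of claim 5 amounts to a few lines of numpy. The concrete threshold and increment below are illustrative stand-ins for the "first" and "second" preset gray values, which the patent leaves unspecified.

```python
import numpy as np

def lift_dark_pixels(img, first_gray=20, second_gray=30):
    """Add `second_gray` to pixels darker than `first_gray` (claim 5 preprocessing).

    `first_gray` and `second_gray` are assumed example values for the first
    and second preset gray values.
    """
    out = img.astype(np.int32)            # widen to avoid uint8 overflow
    out[out < first_gray] += second_gray  # lift only the dark pixels
    return np.clip(out, 0, 255).astype(np.uint8)
```

Lifting near-black pixels gives weakly echogenic regions enough contrast for the subsequent Harris feature extraction.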
  6. An ultrasonic wide-scene imaging device, characterized in that the device comprises:
    a feature acquisition module, configured to extract respective image characteristic points from two acquired original images to be spliced;
    a match point acquisition module, configured to obtain, based on the image characteristic points of each original image, multiple match points corresponding between the two original images;
    a target matching point pair acquisition module, configured to obtain a target matching point pair from the multiple match points;
    an overlapping region acquisition module, configured to obtain an overlapping region of the two original images based on the target matching point pair;
    a suture line establishing module, configured to establish a suture line based on the overlapping region;
    a splicing module, configured to splice the two original images along the suture line into an ultrasonic wide-scene image.
  7. The device according to claim 6, characterized in that the feature acquisition module is specifically configured to extract the respective image characteristic points from the two acquired original images to be spliced using the Harris algorithm.
  8. The device according to claim 7, characterized in that the feature acquisition module comprises:
    a derivation unit, configured to obtain, for each original image, the first-order differential derivative in the X direction and the first-order differential derivative in the Y direction of each pixel of the original image;
    a matrix acquiring unit, configured to convolve the first-order differential derivatives in the X direction and the Y direction with a filter function to construct a Harris correlation matrix;
    a feature acquiring unit, configured to obtain, from the eigenvalues of the Harris correlation matrix, the image characteristic points that meet a preset condition.
  9. The device according to claim 6, characterized in that the match point acquisition module is specifically configured to obtain, based on the image characteristic points of each original image, the multiple match points corresponding between the two original images using the RANSAC algorithm.
  10. The device according to claim 6, characterized in that the device further comprises:
    an acquisition module, configured to obtain two ultrasonic images collected by an ultrasonic front end;
    an original image acquisition module, configured to, for each ultrasonic image, add a second preset gray value to the pixels in the acquired ultrasonic image that are below a first preset gray value, to obtain an original image that meets a preset condition.
CN201710850265.4A 2017-09-19 2017-09-19 Ultrasonic wide-scene imaging method and device Pending CN107644411A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710850265.4A CN107644411A (en) 2017-09-19 2017-09-19 Ultrasonic wide-scene imaging method and device


Publications (1)

Publication Number Publication Date
CN107644411A true CN107644411A (en) 2018-01-30

Family

ID=61113905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710850265.4A Pending CN107644411A (en) 2017-09-19 2017-09-19 Ultrasonic wide-scene imaging method and device

Country Status (1)

Country Link
CN (1) CN107644411A (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101110122A (en) * 2007-08-31 2008-01-23 北京工业大学 Large cultural heritage picture pattern split-joint method based on characteristic
CN101556692A (en) * 2008-04-09 2009-10-14 西安盛泽电子有限公司 Image mosaic method based on neighborhood Zernike pseudo-matrix of characteristic points
CN101853524A (en) * 2010-05-13 2010-10-06 北京农业信息技术研究中心 Method for generating corn ear panoramic image by using image sequence
CN102927448A (en) * 2012-09-25 2013-02-13 北京声迅电子股份有限公司 Undamaged detection method for pipeline
CN103530844A (en) * 2013-09-17 2014-01-22 上海皓信生物科技有限公司 Splicing method based on mycobacterium tuberculosis acid-fast staining image
CN104166972A (en) * 2013-05-17 2014-11-26 中兴通讯股份有限公司 Terminal and method for realizing image processing
CN105608689A (en) * 2014-11-20 2016-05-25 深圳英飞拓科技股份有限公司 Method and device for eliminating image feature mismatching for panoramic stitching
CN105957007A (en) * 2016-05-05 2016-09-21 电子科技大学 Image stitching method based on characteristic point plane similarity
CN106204437A (en) * 2016-06-28 2016-12-07 深圳市凌云视迅科技有限责任公司 A kind of image interfusion method
WO2017113818A1 (en) * 2015-12-31 2017-07-06 深圳市道通智能航空技术有限公司 Unmanned aerial vehicle and panoramic image stitching method, device and system thereof


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Dang Jianwu et al.: "Research on an optimized image stitching algorithm based on SIFT feature detection", Application Research of Computers *
Yang Ping: "Research and implementation of a vehicle-mounted full-view observer based on image stitching", China Masters' Theses Full-text Database, Information Science and Technology *
Lin Xuejing: "Research on video image stitching technology", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019233422A1 (en) * 2018-06-04 2019-12-12 Shanghai United Imaging Healthcare Co., Ltd. Devices, systems, and methods for image stitching
CN108765277A (en) * 2018-06-04 2018-11-06 上海联影医疗科技有限公司 Image split-joint method, device, computer equipment and storage medium
US11763424B2 (en) 2018-06-04 2023-09-19 Shanghai United Imaging Healthcare Co., Ltd. Devices, systems, and methods for image stitching
CN108765277B (en) * 2018-06-04 2021-05-07 上海联影医疗科技股份有限公司 Image splicing method and device, computer equipment and storage medium
CN109745073A (en) * 2019-01-10 2019-05-14 武汉中旗生物医疗电子有限公司 The two-dimentional matching process and equipment of elastogram displacement
CN111275617B (en) * 2020-01-09 2023-04-07 云南大学 Automatic splicing method and system for ABUS breast ultrasound panorama and storage medium
CN111275617A (en) * 2020-01-09 2020-06-12 云南大学 Automatic splicing method and system for ABUS breast ultrasound panorama and storage medium
CN111553870A (en) * 2020-07-13 2020-08-18 成都中轨轨道设备有限公司 Image processing method based on distributed system
CN111553870B (en) * 2020-07-13 2020-10-16 成都中轨轨道设备有限公司 Image processing method based on distributed system
CN112164000A (en) * 2020-09-28 2021-01-01 深圳华声医疗技术股份有限公司 Image storage method and device for ultrasonic panoramic imaging
CN112308782A (en) * 2020-11-27 2021-02-02 深圳开立生物医疗科技股份有限公司 Panoramic image splicing method and device, ultrasonic equipment and storage medium
CN113689332A (en) * 2021-08-23 2021-11-23 河北工业大学 Image splicing method with high robustness under high repetition characteristic scene
CN118037999A (en) * 2024-04-10 2024-05-14 时代新媒体出版社有限责任公司 Interactive scene construction method and system based on VR thinking teaching

Similar Documents

Publication Publication Date Title
CN107644411A (en) Ultrasonic wide-scene imaging method and device
CN110197493B (en) Fundus image blood vessel segmentation method
Hernandez-Matas et al. FIRE: fundus image registration dataset
Chan et al. Texture-map-based branch-collaborative network for oral cancer detection
US8194936B2 (en) Optimal registration of multiple deformed images using a physical model of the imaging distortion
US20220157047A1 (en) Feature Point Detection
CN109523535B (en) Pretreatment method of lesion image
DE102007046582A1 (en) System and method for segmenting chambers of a heart in a three-dimensional image
Legg et al. Feature neighbourhood mutual information for multi-modal image registration: an application to eye fundus imaging
DE102006054822A1 (en) Virtual biological object`s e.g. colon, characteristics paths e.g. prone position, regulating method for e.g. angioscopy, involves measuring correlation between object paths by minimizing energy function that includes error and switch terms
CN112164043A (en) Method and system for splicing multiple fundus images
CN105979847A (en) Endoscopic image diagnosis support system
Choe et al. Optimal global mosaic generation from retinal images
CN110729045A (en) Tongue image segmentation method based on context-aware residual error network
CN111488912B (en) Laryngeal disease diagnosis system based on deep learning neural network
CN113239755B (en) Medical hyperspectral image classification method based on space-spectrum fusion deep learning
Bourbakis Detecting abnormal patterns in WCE images
Kajihara et al. Non-rigid registration of serial section images by blending transforms for 3D reconstruction
JP4274400B2 (en) Image registration method and apparatus
CN112102385A (en) Multi-modal liver magnetic resonance image registration system based on deep learning
CN109949288A (en) Tumor type determines system, method and storage medium
US9147250B2 (en) System and method for automatic magnetic resonance volume composition and normalization
Al Khalil et al. Late fusion U-Net with GAN-based augmentation for generalizable cardiac MRI segmentation
TWI572186B (en) Adaptive Inpainting for Removal of Specular Reflection in Endoscopic Images
CN113643263A (en) Identification method and system for upper limb bone positioning and forearm bone fusion deformity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180130