CN105869122A - Image processing method and apparatus - Google Patents
Image processing method and apparatus
- Publication number: CN105869122A
- Application number: CN201510824471.9A
- Authority
- CN
- China
- Prior art keywords
- image
- pixel
- marginal point
- edge
- connected domain
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T5/70
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F18/00—Pattern recognition
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; G06V10/00—Arrangements for image or video recognition or understanding; G06V10/40—Extraction of image or video features; G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Abstract
Embodiments of the invention provide an image processing method and apparatus. The method comprises: obtaining an edge feature image of a target image; performing connected domain labeling on the edge feature image; for each of the labeled connected domains, judging whether the number of edge points in the connected domain is greater than a first preset value; and, when the judgment is negative, setting each edge point of the connected domain to a background point. Because the extracted edge feature image is corrected with a connected domain method, point-like or linear noise in the edge feature image can be effectively removed and a more accurate image edge obtained.
Description
Technical field

Embodiments of the present invention relate to the field of video technology, and in particular to an image processing method and apparatus.

Background technology

In practical image processing, the edge features of an image are a kind of basic image feature. They are widely used in higher-level feature description, image recognition, image segmentation, image enhancement, image compression, and other image processing and analysis techniques, so that the image can be further analyzed and understood. However, various kinds of noise are inevitably introduced during image acquisition, transmission, and processing, and the frequency band of this noise overlaps with that of the image edges, which makes the extracted edge feature image inaccurate.
Summary of the invention

Embodiments of the present invention provide an image processing method and apparatus to improve the accuracy of the obtained image edges.

In a first aspect, an embodiment of the present invention provides an image processing method, including:

obtaining an edge feature image of a target image;

performing connected domain labeling on the edge feature image;

for each of the labeled connected domains, judging whether the number of edge points in the connected domain is greater than a first preset value; and, if not, setting each edge point of the connected domain to a background point.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:

an acquisition module, configured to obtain an edge feature image of a target image;

a labeling module, configured to perform connected domain labeling on the edge feature image;

a correction module, configured to, for each of the connected domains in the edge feature image, judge whether the number of edge points in the connected domain is greater than a first preset value, and, if not, set each edge point of the connected domain to a background point.

In the image processing method and apparatus provided by the present invention, the extracted edge feature image is corrected by a connected domain method, which effectively removes point-like or linear noise and thereby yields a more accurate image edge.
Brief description of the drawings

To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required by the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.

Fig. 1 is a schematic flowchart of an image processing method provided by the present invention;

Fig. 2 and Fig. 3 show pixel-structure diagrams for an edge feature image;

Fig. 4 shows one possible edge feature image;

Fig. 5 is a flowchart of one possible implementation of step S13 in Fig. 1;

Fig. 6 is a schematic flowchart of one possible implementation of the image processing method provided by the present invention;

Fig. 7 shows four initial edge feature images of a station logo region;

Fig. 8 shows the edge feature image obtained after step S62;

Fig. 9 shows the edge feature image obtained after step S65;

Fig. 10 is a schematic structural diagram of an image processing apparatus provided by the present invention.
Detailed description of the invention

To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to Fig. 1, a schematic flowchart of an image processing method provided by the present invention includes:

Step S11: obtain an edge feature image of a target image.

Step S12: perform connected domain labeling on the edge feature image.

Step S13: for each of the labeled connected domains, judge whether the number of edge points in the connected domain is greater than a first preset value; if not, set each edge point of the connected domain to a background point.
In the image processing method provided by the present invention, connected domain labeling is performed on the edge feature image, and every edge point belonging to a connected domain whose edge-point count falls below the preset value is set to a background point. Connected domains with few edge points, such as point-like noise and linear noise, are thereby filtered out effectively.
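Steps S11-S13 can be condensed into a short sketch. The snippet below is a minimal NumPy illustration, not the patent's implementation: it flood-fills each 8-connected domain of edge points (gray value 255) and erases every domain smaller than the first preset value. The function and parameter names are illustrative.

```python
import numpy as np

def remove_small_domains(edge_img, first_preset=5):
    """Flood-fill each 8-connected domain of edge points (value 255) and
    set every point of a domain to background (0) when the domain has
    fewer than `first_preset` edge points."""
    h, w = edge_img.shape
    out = edge_img.copy()
    seen = np.zeros((h, w), dtype=bool)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    for y in range(h):
        for x in range(w):
            if out[y, x] != 255 or seen[y, x]:
                continue
            stack, domain = [(y, x)], []
            seen[y, x] = True
            while stack:                       # flood-fill one domain
                cy, cx = stack.pop()
                domain.append((cy, cx))
                for dy, dx in offs:
                    ny, nx = cy + dy, cx + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and out[ny, nx] == 255 and not seen[ny, nx]):
                        seen[ny, nx] = True
                        stack.append((ny, nx))
            if len(domain) < first_preset:     # small domain: erase it
                for cy, cx in domain:
                    out[cy, cx] = 0
    return out
```

With the patent's example threshold of 5, a two-pixel speck disappears while a six-pixel stroke survives.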
In a specific implementation, the target image in step S11 may be any type of image, for example the image of a station logo region in a television picture, or an image of a person or another object.

Step S11 can be implemented in several ways, for example by extracting the edge feature image of the target image with the prior-art Canny or LoG algorithm.

Understandably, the edge feature image obtained after step S11 is generally a binarized matrix, in which one part of the pixels has a gray value of 255 and the other part has a gray value of 0; an edge point may be a pixel whose gray value is 255, and a background point may be a pixel whose gray value is 0.
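The patent defers edge extraction to prior-art Canny or LoG. As a stand-in that produces the binary convention just described (255 for edge points, 0 for background), a simple gradient-magnitude threshold suffices for illustration; the threshold value is an assumption, not from the patent.

```python
import numpy as np

def simple_edge_feature_image(gray, thresh=50):
    """Illustrative stand-in for Canny/LoG: threshold the gradient
    magnitude and emit the 255/0 binary matrix described above."""
    gy, gx = np.gradient(gray.astype(np.float64))  # per-axis gradients
    mag = np.hypot(gx, gy)                         # gradient magnitude
    return np.where(mag > thresh, 255, 0).astype(np.uint8)
```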
As an optional implementation (not shown in the figures), step S11 may include:

Step S111: obtain a preset number of initial edge feature images corresponding to the target image.

Step S112: for each edge point contained in each initial edge feature image, determine whether the number of times that edge point appears at the same coordinate position across the initial edge feature images is greater than a second preset value; if not, set the edge point to a background point.

Step S113: merge the remaining edge points of the initial edge feature images to obtain the edge feature image of the target image.
In practice, if an edge point appears only rarely across multiple pictures (for example, across consecutive frames), say in only one or two of them, it can essentially be identified as a noise point. After steps S111-S113, such noise points in the initial edge feature images are significantly reduced. On the one hand, this improves the accuracy of the finally obtained image edge; on the other hand, because the number of connected domains is reduced (the connected domains corresponding to some noise are deleted in steps S111-S113), the resources required to perform steps S12 and S13 are also reduced.
Step S111 may likewise be implemented with a prior-art edge extraction algorithm. Understandably, in each of the initial edge feature images acquired in step S111, the position of the target image should be fixed. For example, when the target image is the image of a station logo region, the station logo stably appears in the upper-left corner of each picture.

In step S112, the number of initial edge feature images to acquire can be set arbitrarily, and the corresponding second preset value can be determined according to that number. For example, when four initial edge feature images are acquired, the second preset value may be 2: an edge point that appears at the same coordinate in fewer than two of the pictures is set to a background point.
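The voting of steps S111-S113 can be sketched as below. Note a hedge: the claims say an edge point is kept when it occurs "more than" the second preset value times, while the numeric example above ("fewer than 2 of 4 removed") reads more like "at least"; the sketch follows the claims' strict comparison. Function and argument names are illustrative.

```python
import numpy as np

def fuse_initial_edges(initial_imgs, second_preset=2):
    """Keep a pixel as an edge point (255) only if it is an edge point
    at the same coordinate in more than `second_preset` of the initial
    edge feature images; otherwise it becomes background (0)."""
    stack = np.stack([(im == 255) for im in initial_imgs])
    counts = stack.sum(axis=0)             # occurrences per coordinate
    return np.where(counts > second_preset, 255, 0).astype(np.uint8)
```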
In a specific implementation, step S12 can be performed as follows: carry out a first connected domain labeling pass over the pixels from a first direction to a second direction and from a third direction to a fourth direction; then carry out a second connected domain labeling pass, over the pixels labeled in the first pass, from the second direction to the first direction and from the fourth direction to the third direction. The first direction is one of left and right, and the second direction is the other; the third direction is one of up and down, and the fourth direction is the other.

The advantage of this is that the connected domain labeling becomes more accurate: it avoids the situation, caused by a single pass, in which one connected domain is marked with two different connected domain labels.
Specifically, each connected domain labeling pass may proceed as follows: when a pixel is an edge point and an already-labeled edge point exists among its adjacent pixels, the pixel is given the connected domain label of that labeled edge point; when a pixel is an edge point and no labeled edge point exists among its adjacent pixels, the pixel is given a new connected domain label.

Understandably, when a pixel is an edge point, edge points exist among its adjacent pixels, and none of those adjacent edge points has been labeled yet, the pixel and its adjacent edge points can together be given one new connected domain label; and if no edge point exists among the adjacent pixels, only the pixel itself is given a new connected domain label.
Here, the pixels adjacent to a pixel may be its eight neighbors. Referring to Fig. 2, for a pixel P(x, y), the adjacent pixels are P4(x-1, y-1), P3(x-1, y), P2(x-1, y+1), P5(x, y-1), P1(x, y+1), P6(x+1, y-1), P7(x+1, y), and P8(x+1, y+1). Alternatively, the adjacent pixels may be the four neighbors: referring to Fig. 3, for a pixel P(x, y), the adjacent pixels are P2(x-1, y), P3(x-1, y+1), P(x, y-1), P1(x, y+1), and P4(x+1, y).
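The two neighborhood choices can be captured in a few lines. Note that the list attributed to Fig. 3 above contains five coordinates; the sketch below assumes the conventional 4-neighborhood instead.

```python
# Fig. 2's 8-neighbourhood and (as an assumption approximating Fig. 3)
# the conventional 4-neighbourhood, expressed as (dy, dx) offsets.
NEIGHBORS_8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
NEIGHBORS_4 = [(-1, 0), (0, -1), (0, 1), (1, 0)]

def neighbors(y, x, h, w, connectivity=8):
    """In-bounds neighbours of pixel (y, x) in an h-by-w image."""
    offs = NEIGHBORS_8 if connectivity == 8 else NEIGHBORS_4
    return [(y + dy, x + dx) for dy, dx in offs
            if 0 <= y + dy < h and 0 <= x + dx < w]
```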
When performing connected domain labeling, the image can be scanned from left to right and from top to bottom. The initial connected domain label is 1; each time a new connected domain is encountered, the label value is incremented by 1 and assigned as the label of that new domain, until all connected domains are labeled. After the first labeling pass, however, a problem may remain: two regions that should form a single connected domain may have been given two different labels. Take the edge feature image in Fig. 4 as an example. The first and second strokes of the character "文" should belong to the same connected domain, but under the labeling method above, for the upper-left-most edge point of the second stroke, the pixels to its left, above it, and to its upper left are background points and therefore carry no connected domain label, while the pixels to its lower left, right, below, lower right, and upper right have not yet been scanned and carry no label either. That upper-left-most edge point is therefore given a new connected domain label, different from the label of the first stroke. Two regions that actually belong to one connected domain are thus marked with two labels, the number of edge points in the connected domain corresponding to each label drops sharply, and the edge points of the domain may consequently be set to background points by mistake. To avoid this, a reverse labeling pass can then be carried out in the manner described above; during this pass, the connected domain label of the pixels at the upper left of the second stroke is corrected to the label already assigned to the first stroke of "文".
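A minimal sketch of this forward-plus-reverse labeling follows. The patent does not spell out a termination rule, so this illustration simply repeats reverse and forward minimum-label sweeps until no label changes, which is enough to re-merge domains that the first pass split, such as the two strokes of "文".

```python
import numpy as np

def two_pass_label(edge_img):
    """First pass (top to bottom, left to right): inherit a label from an
    already-labeled neighbour or open a new one.  Then reverse/forward
    min-label sweeps merge domains the first pass split."""
    h, w = edge_img.shape
    labels = np.zeros((h, w), dtype=np.int32)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    next_label = 1
    for y in range(h):
        for x in range(w):
            if edge_img[y, x] != 255:
                continue
            nbr = [labels[y + dy, x + dx] for dy, dx in offs
                   if 0 <= y + dy < h and 0 <= x + dx < w
                   and labels[y + dy, x + dx] > 0]
            if nbr:
                labels[y, x] = min(nbr)    # inherit an existing label
            else:
                labels[y, x] = next_label  # open a new connected domain
                next_label += 1
    changed = True
    while changed:                         # sweep until labels are stable
        changed = False
        for ys, xs in ((range(h - 1, -1, -1), range(w - 1, -1, -1)),
                       (range(h), range(w))):
            for y in ys:
                for x in xs:
                    if labels[y, x] == 0:
                        continue
                    nbr = [labels[y + dy, x + dx] for dy, dx in offs
                           if 0 <= y + dy < h and 0 <= x + dx < w
                           and labels[y + dy, x + dx] > 0]
                    m = min(nbr + [labels[y, x]])
                    if m < labels[y, x]:
                        labels[y, x] = m
                        changed = True
    return labels
```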
In a specific implementation, the first preset value in step S13 can be set according to the number of pixels of the edge feature image. For example, when the edge feature image contains 450*180 pixels, the first preset value may be set to 5, so that connected domains with fewer than 5 edge points are removed. Conversely, in step S13, if the number of edge points in a connected domain is greater than the first preset value, each edge point of that connected domain is retained.
Step S13 can be implemented specifically in the manner of Fig. 5:

Step S131: count the number N of connected domain labels, and the number of edge points in the connected domain corresponding to each label (the count for the i-th domain is denoted ni).

Step S132: set the initial value of the index i to 1.

Step S133: judge whether i is less than N+1; if so, go to step S134; if not, go to step S138.

Step S134: judge whether the edge-point count ni of the connected domain with index i is less than the first preset value T; if so, go to step S135; otherwise go to step S136.

Step S135: delete all edge points of the connected domain with index i (that is, set these edge points to background points), then go to step S137.

Step S136: retain all edge points of the connected domain with index i, then go to step S137.

Step S137: set i = i + 1, then go to step S133.

Step S138: output the edge feature image.
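The loop of Fig. 5 can be vectorized: count the edge points per label once, then clear every label whose count is below T. The helper below assumes a label image in which 0 denotes background, as produced by the labeling step; names are illustrative.

```python
import numpy as np

def filter_labeled_domains(labels, first_preset=5):
    """Delete (set to background 0) every connected domain whose
    edge-point count is below `first_preset`; larger domains keep all
    their points, matching steps S131-S138."""
    counts = np.bincount(labels.ravel())              # counts[0] = background
    small = np.isin(labels, np.nonzero(counts < first_preset)[0])
    small &= labels > 0                               # never touch background
    return np.where(small, 0, labels)
```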
The image processing method provided by the present invention is described below with reference to a concrete scenario. Assume that the target image to be processed is a station logo region image, that the preset number above is 4, that the second preset value is 3, and that the first preset value is 5.

Referring to Fig. 6, an optional implementation of the image processing method provided by the present invention may include the following flow:

Step S61: perform edge feature extraction on each of the four station logo region images to obtain the four corresponding initial edge feature images. Assume the four initial edge feature images obtained after step S61 are the images shown in Fig. 7.

Step S62: synthesize the four initial edge feature images into the edge feature image of the station logo region image. The edge feature image of the target image obtained after step S62 is shown in Fig. 8.

Step S63: perform connected domain labeling on the edge feature image obtained in step S62.

Step S64: for each labeled connected domain, determine whether the number of edge points in the connected domain is less than 5; if so, delete each edge point of the connected domain.

Step S65: output the edge feature image obtained after step S64.

The edge feature image obtained by processing the image of Fig. 8 through step S65 is shown in Fig. 9. It can be seen that, compared with the edge feature images of Fig. 7, the edge feature image in Fig. 9 is clearer, and most of the noise points have been removed.
Based on the same concept, the present invention further provides an image processing apparatus that can be used to perform any of the image processing methods described above. Referring to Fig. 10, the apparatus may include:

an acquisition module 1011, configured to obtain an edge feature image of a target image;

a labeling module 1012, configured to perform connected domain labeling on the edge feature image;

a correction module 1013, configured to, for each of the connected domains in the edge feature image, judge whether the number of edge points in the connected domain is greater than a first preset value, and, if not, set each edge point of the connected domain to a background point.

Further, the acquisition module 1011 may be specifically configured to: obtain a preset number of initial edge feature images corresponding to the target image; for each edge point contained in each initial edge feature image, judge whether the number of times the edge point appears at the same coordinate position across the initial edge feature images is greater than a second preset value, and set the edge point to a background point when the judgment is negative; and obtain the edge feature image of the target image from the remaining edge points.

Further, the target image may be the image of a station logo region.

Further, the labeling module 1012 may include a first labeling submodule 10121 and a second labeling submodule 10122 (not shown in the figures). The first labeling submodule 10121 is configured to perform a first connected domain labeling pass over the pixels from a first direction to a second direction and from a third direction to a fourth direction; the second labeling submodule 10122 is configured to perform a second connected domain labeling pass, over the pixels labeled in the first pass, from the second direction to the first direction and from the fourth direction to the third direction.

Further, each labeling submodule is configured to: when a pixel is an edge point and a labeled edge point exists among its adjacent pixels, give the pixel the connected domain label of that labeled edge point; and when a pixel is an edge point and no labeled edge point exists among its adjacent pixels, give the pixel a new connected domain label.
The apparatus embodiments described above are merely schematic. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the embodiment. A person of ordinary skill in the art can understand and implement the embodiments without creative effort.

Through the description of the embodiments above, a person skilled in the art can clearly understand that each implementation can be realized by software plus a necessary general-purpose hardware platform, or of course by hardware. Based on this understanding, the technical solutions above, or the part of them contributing over the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform the method described in each embodiment or in certain parts of an embodiment.
Finally, it should be noted that the embodiments above are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents, and that such modifications or replacements do not depart, in essence, from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)

1. An image processing method, characterized by comprising:
obtaining an edge feature image of a target image;
performing connected domain labeling on the edge feature image;
for each of the labeled connected domains, judging whether the number of edge points in the connected domain is greater than a first preset value; and, when the judgment is negative, setting each edge point of the connected domain to a background point.

2. The method of claim 1, characterized in that the target image is an image of a station logo region.

3. The method of claim 1, characterized in that obtaining the edge feature image of the target image comprises:
obtaining a preset number of initial edge feature images corresponding to the target image;
for each edge point contained in each initial edge feature image, determining whether the number of times the edge point appears at the same coordinate position across the initial edge feature images is greater than a second preset value, and, when the judgment is negative, setting the edge point to a background point;
merging the remaining edge points of the initial edge feature images to obtain the edge feature image of the target image.

4. The method of claim 1, characterized in that performing connected domain labeling on the edge feature image comprises:
performing a first connected domain labeling pass over the pixels from a first direction to a second direction and from a third direction to a fourth direction; and performing a second connected domain labeling pass, over the pixels labeled in the first pass, from the second direction to the first direction and from the fourth direction to the third direction; wherein the first direction is one of left and right and the second direction is the other, and the third direction is one of up and down and the fourth direction is the other.

5. The method of claim 4, characterized in that each connected domain labeling pass comprises:
when a pixel is an edge point and a labeled edge point exists among the pixels adjacent to the pixel, giving the pixel the connected domain label of the labeled edge point; and when a pixel is an edge point and no labeled edge point exists among the pixels adjacent to the pixel, giving the pixel a new connected domain label.

6. An image processing apparatus, characterized by comprising:
an acquisition module, configured to obtain an edge feature image of a target image;
a labeling module, configured to perform connected domain labeling on the edge feature image;
a correction module, configured to, for each of the connected domains in the edge feature image, judge whether the number of edge points in the connected domain is greater than a first preset value, and, when the judgment is negative, set each edge point of the connected domain to a background point.

7. The apparatus of claim 6, characterized in that the acquisition module is specifically configured to: obtain a preset number of initial edge feature images corresponding to the target image; for each edge point contained in each initial edge feature image, judge whether the number of times the edge point appears at the same coordinate position across the initial edge feature images is greater than a second preset value, and, when the judgment is negative, set the edge point to a background point; and obtain the edge feature image of the target image from the remaining edge points.

8. The apparatus of claim 6, characterized in that the target image is an image of a station logo region.

9. The apparatus of claim 6, characterized in that the labeling module comprises a first labeling submodule and a second labeling submodule;
the first labeling submodule is configured to perform a first connected domain labeling pass over the pixels from a first direction to a second direction and from a third direction to a fourth direction;
the second labeling submodule is configured to perform a second connected domain labeling pass, over the pixels labeled in the first pass, from the second direction to the first direction and from the fourth direction to the third direction.

10. The apparatus of claim 9, characterized in that each labeling submodule is configured to: when a pixel is an edge point and a labeled edge point exists among the pixels adjacent to the pixel, give the pixel the connected domain label of the labeled edge point; and when a pixel is an edge point and no labeled edge point exists among the pixels adjacent to the pixel, give the pixel a new connected domain label.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510824471.9A CN105869122A (en) | 2015-11-24 | 2015-11-24 | Image processing method and apparatus |
PCT/CN2016/086594 WO2017088462A1 (en) | 2015-11-24 | 2016-06-21 | Image processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510824471.9A CN105869122A (en) | 2015-11-24 | 2015-11-24 | Image processing method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105869122A true CN105869122A (en) | 2016-08-17 |
Family
ID=56623754
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510824471.9A Pending CN105869122A (en) | 2015-11-24 | 2015-11-24 | Image processing method and apparatus |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN105869122A (en) |
WO (1) | WO2017088462A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108629887A (en) * | 2017-03-17 | 2018-10-09 | 深圳怡化电脑股份有限公司 | Paper Currency Identification and device |
CN110188786A (en) * | 2019-04-11 | 2019-08-30 | 广西电网有限责任公司电力科学研究院 | A kind of robot graphics' recognizer for tank-type lightning arrester leakage current |
CN110223257A (en) * | 2019-06-11 | 2019-09-10 | 北京迈格威科技有限公司 | Obtain method, apparatus, computer equipment and the storage medium of disparity map |
CN111833398A (en) * | 2019-04-16 | 2020-10-27 | 杭州海康威视数字技术股份有限公司 | Method and device for marking pixel points in image |
CN112001406A (en) * | 2019-05-27 | 2020-11-27 | 杭州海康威视数字技术股份有限公司 | Text region detection method and device |
CN114387292A (en) * | 2022-03-25 | 2022-04-22 | 北京市农林科学院信息技术研究中心 | Image edge pixel point optimization method, device and equipment |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111986208A (en) * | 2019-10-25 | 2020-11-24 | 深圳市安达自动化软件有限公司 | Target mark positioning circle capturing and positioning method and device and computer equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060045346A1 (en) * | 2004-08-26 | 2006-03-02 | Hui Zhou | Method and apparatus for locating and extracting captions in a digital image |
CN101546424A (en) * | 2008-03-24 | 2009-09-30 | 富士通株式会社 | Method and device for processing image and watermark detection system |
CN101877055A (en) * | 2009-12-07 | 2010-11-03 | 北京中星微电子有限公司 | Method and device for positioning key feature point |
CN102426647A (en) * | 2011-10-28 | 2012-04-25 | Tcl集团股份有限公司 | Station identification method and device |
CN104504717A (en) * | 2014-12-31 | 2015-04-08 | 北京奇艺世纪科技有限公司 | Method and device for detection of image information |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104636706B (en) * | 2015-03-04 | 2017-12-26 | Shenzhen Jinzhun Biomedical Engineering Co., Ltd. | Automatic segmentation method for barcode images with complex backgrounds based on gradient direction consistency |
- 2015
  - 2015-11-24 CN CN201510824471.9A patent/CN105869122A/en active Pending
- 2016
  - 2016-06-21 WO PCT/CN2016/086594 patent/WO2017088462A1/en active Application Filing
Non-Patent Citations (2)
Title |
---|
Zheng Haihua: "Research on Vision-Based Obstacle Avoidance for a Level-Ground Exploration Robot", China Master's Theses Full-text Database, Information Science and Technology * |
Gao Hongbo et al.: "A New Algorithm for Connected Region Labeling in Binary Images", Journal of Computer Applications * |
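Both the application's claims (labeling connected domains of edge pixels) and the second non-patent citation above concern connected-region labeling of binary images. As illustrative background only — this is not the patent's own algorithm, and the function name is invented for this sketch — a minimal BFS-based labeling of 4- or 8-connected regions might look like:

```python
from collections import deque

def label_connected_components(binary, connectivity=4):
    """Label connected regions of 1-pixels in a binary image (list of lists).

    Returns a same-shaped label map: background pixels stay 0, and each
    connected region of foreground pixels receives a distinct positive label.
    """
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    if connectivity == 4:
        nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:  # 8-connectivity: include the diagonal neighbors
        nbrs = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not labels[y][x]:
                # Unlabeled foreground pixel: start a new region and
                # flood-fill it breadth-first.
                next_label += 1
                labels[y][x] = next_label
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for dy, dx in nbrs:
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
    return labels
```

The cited journal article proposes a different (two-pass, equivalence-table) labeling scheme; the BFS version above is simply the shortest correct baseline for the same task.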
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108629887A (en) * | 2017-03-17 | 2018-10-09 | Shenzhen Yihua Computer Co., Ltd. | Paper money identification method and device |
CN108629887B (en) * | 2017-03-17 | 2021-02-02 | Shenzhen Yihua Computer Co., Ltd. | Paper money identification method and device |
CN110188786A (en) * | 2019-04-11 | 2019-08-30 | Electric Power Research Institute of Guangxi Power Grid Co., Ltd. | Robot image recognition algorithm for leakage current of pot-type lightning arrester |
CN110188786B (en) * | 2019-04-11 | 2022-12-06 | Electric Power Research Institute of Guangxi Power Grid Co., Ltd. | Robot image recognition algorithm for leakage current of pot-type lightning arrester |
CN111833398A (en) * | 2019-04-16 | 2020-10-27 | Hangzhou Hikvision Digital Technology Co., Ltd. | Method and device for labeling pixel points in an image |
CN111833398B (en) * | 2019-04-16 | 2023-09-08 | Hangzhou Hikvision Digital Technology Co., Ltd. | Method and device for labeling pixel points in an image |
CN112001406A (en) * | 2019-05-27 | 2020-11-27 | Hangzhou Hikvision Digital Technology Co., Ltd. | Text region detection method and device |
CN112001406B (en) * | 2019-05-27 | 2023-09-08 | Hangzhou Hikvision Digital Technology Co., Ltd. | Text region detection method and device |
CN110223257A (en) * | 2019-06-11 | 2019-09-10 | Beijing Megvii Technology Co., Ltd. | Method and device for acquiring a disparity map, computer device and storage medium |
CN110223257B (en) * | 2019-06-11 | 2021-07-09 | Beijing Megvii Technology Co., Ltd. | Method and device for acquiring a disparity map, computer device and storage medium |
CN114387292A (en) * | 2022-03-25 | 2022-04-22 | Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences | Image edge pixel point optimization method, device and equipment |
CN114387292B (en) * | 2022-03-25 | 2022-07-01 | Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences | Image edge pixel point optimization method, device and equipment |
Also Published As
Publication number | Publication date |
---|---|
WO2017088462A1 (en) | 2017-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105869122A (en) | Image processing method and apparatus | |
EP3916627A1 (en) | Living body detection method based on facial recognition, and electronic device and storage medium | |
CN103189899B (en) | Object display apparatus and object displaying method | |
DE102009036474B4 (en) | Image data compression method, pattern model positioning method in image processing, image processing apparatus, image processing program and computer-readable recording medium | |
EP2572317B1 (en) | Recognition of digital images | |
CN109409377B (en) | Method and device for detecting characters in image | |
CA2472524A1 (en) | Registration of separations | |
CN101286230B (en) | Image processing apparatus and method thereof | |
CN110458855B (en) | Image extraction method and related product | |
CN109598271B (en) | Character segmentation method and device | |
CN106033535A (en) | Electronic paper marking method | |
CN105095890A (en) | Character segmentation method and device in image | |
DE102018003475A1 (en) | Form-based graphic search | |
CN109377494A (en) | Semantic segmentation method and apparatus for images | |
CN107016417A (en) | Character recognition method and device | |
CN106156691A (en) | Method and device for processing images with complex backgrounds | |
CN105354570A (en) | Method and system for precisely locating left and right boundaries of license plate | |
CN114998445A (en) | Image sparse point stereo matching method | |
CN113177941B (en) | Steel coil edge crack identification method, system, medium and terminal | |
CN114387450A (en) | Picture feature extraction method and device, storage medium and computer equipment | |
CN111881846B (en) | Image processing method, image processing apparatus, image processing device, image processing apparatus, storage medium, and computer program | |
CN106682670A (en) | Method and system for identifying station caption | |
EP2853089B1 (en) | Pattern processing apparatus, pattern processing method, and pattern processing program | |
CN111008987B (en) | Method and device for extracting edge image based on gray background and readable storage medium | |
CN107437257A (en) | Moving object segmentation method under a moving background |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | | Application publication date: 20160817 |