CN103366174B - Method and system for obtaining image information - Google Patents


Info

Publication number
CN103366174B
Authority
CN
China
Prior art keywords
window
target point
image
data
image information
Prior art date
Legal status
Active
Application number
CN201310293942.9A
Other languages
Chinese (zh)
Other versions
CN103366174A (en)
Inventor
陈济棠
Current Assignee
Shenzhen Taishan Sports Technology Co.,Ltd.
Original Assignee
Shenzhen Taishan Sports Technology Corp Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Taishan Sports Technology Corp Ltd
Priority to CN201310293942.9A
Publication of CN103366174A
Application granted
Publication of CN103366174B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses a method and system for obtaining image information. The method includes the following steps. S1: an image sensor is controlled to expose its entire exposure area, a full-image search is performed on the exposed image data to obtain a target point, a window range containing the target point is set, and the window position is recorded. S2: the image sensor is controlled to expose the recorded window region within the exposure area, and the exposed image data is extracted. S3: feature extraction is performed on the data extracted in S2 and whether the target point exists is determined by matching; if the target point does not exist, the process jumps to S1, otherwise S4 is performed. S4: whether the target point and the window satisfy a preset positional relationship is judged; if not, the window position is adjusted according to a preset adjustment strategy, the window position record is updated, and the process returns to S2. When a stop signal is detected, the process ends. By predicting the target range and having the image sensor perform a partial exposure to obtain only the data within the predicted range, the amount of data acquired and the amount of image-matching computation are reduced, and the speed and accuracy of target tracking are improved.

Description

Method and system for obtaining image information
Technical field
The present invention relates to the field of image matching and target tracking, and in particular to a method and system for obtaining image information.
Background art
Target tracking in image sequences has long been an important problem in computer vision, image processing and pattern recognition. It has a wide range of applications, such as target tracking, industrial product monitoring and traffic intersection monitoring.
However, most existing approaches process the entire image: the target is searched for and matched over the whole frame, so the amount of matching computation per image is large and the target cannot be tracked quickly.
Summary of the invention
The present invention provides a method and system for obtaining image information, which solve the prior-art problems of heavy image-matching computation and slow target tracking.
To solve the above technical problems, the invention provides a method for obtaining image information, comprising the following steps:
S1: control an image sensor to expose its entire exposure area, extract the exposed image data and perform a full-image search to obtain a target point; based on this target point, set a window range containing the target point, and record the position of the set window within the entire exposure area of the image sensor;
S2: control the image sensor to expose the recorded window region within the exposure area, and extract the image data within the window range obtained by the exposure;
S3: perform feature extraction on the data within the window range and determine by matching whether the target point exists; if not, jump back to step S1; if so, execute step S4;
S4: calculate the position within the window of the target point obtained in step S3 and judge whether the target point and the window satisfy a preset positional relationship; if so, return directly to step S2; if not, adjust the window position according to a preset adjustment strategy, update the recorded window position, and then return to step S2.
Throughout steps S1 to S4, whether a stop signal has arrived is monitored continuously; if so, the acquisition of image information is terminated.
Further, setting the window range containing the target point based on the target point in step S1 includes: based on the target point, setting a window range centered on the target point.
Further, in step S4, calculating the position within the window of the target point obtained in step S3, judging whether the target point and the window satisfy the preset positional relationship and, if not, adjusting the window position according to the preset window adjustment strategy includes: after obtaining the position of the target point within the window, judging whether the target point is located at the window center and, if not, adjusting the window position so that the target point is located at the center of the adjusted window.
Further, in step S4, calculating the position within the window of the target point obtained in step S3, judging whether the target point and the window satisfy the preset positional relationship and, if not, adjusting the window position according to the preset window adjustment strategy includes: after obtaining the position of the target point within the window, calculating the pixel distance from the target point to the window edge and judging whether this distance exceeds a preset distance threshold; if not, adjusting the window position so that the target point is located at the center of the adjusted window.
Further, the window is a square window with a default size of 21 pixels × 21 pixels, and the preset distance threshold is 5 pixels.
The present invention also provides a system for obtaining image information, including:
an image sensor unit, for obtaining exposure images;
a control unit, for controlling the image sensor unit to expose the entire exposure area or the recorded window region within the exposure area, and for stopping the system when a stop signal arrives;
a full-image search and processing unit, for extracting the exposed image data when the image sensor unit exposes the entire exposure area, performing a full-image search to obtain the target point, and setting, based on this target point, a window range containing the target point;
a window data acquisition unit, for extracting the image data within the window range obtained by the exposure when the image sensor unit exposes the recorded window region within the exposure area;
a matching calculation and processing unit, for performing feature extraction on the image data obtained by the window data acquisition unit and determining by matching whether the target point exists;
a coordinate calculation and window position adjustment unit, for calculating the position of the target point found by the matching calculation and processing unit, judging whether the target point and the window satisfy the preset positional relationship and, if not, adjusting the window position according to the preset window adjustment strategy;
a window coordinate storage unit, for recording the window position information, the window position being the position of the window within the entire exposure area of the image sensor unit; and
a signal stop unit, for sending the stop signal to the control unit.
Further, the window range set by the full-image search and processing unit based on the target point is centered on the target point.
Further, the coordinate calculation and window position adjustment unit judges whether the target point and the window satisfy the preset positional relationship and, if not, adjusts the window position according to the preset window adjustment strategy, by: judging whether the target point is located at the window center and, if not, adjusting the window position so that the target point is located at the center of the adjusted window.
Further, the coordinate calculation and window position adjustment unit judges whether the target point and the window satisfy the preset positional relationship and, if not, adjusts the window position according to the preset window adjustment strategy, by: calculating the pixel distance from the target point to the window edge, judging whether this distance exceeds the preset distance threshold and, if not, adjusting the window position so that the target point is located at the center of the adjusted window.
Further, the window is a square window with a default size of 21 pixels × 21 pixels, and the preset distance threshold is 5 pixels.
The beneficial effects of the invention are as follows. With the method and system for obtaining image information provided by the present invention, during target tracking an entire image is obtained initially and a full-image search is performed to find the tracked target; the position where the target is likely to appear in subsequent frames is then predicted, and the image sensor is controlled to perform a partial exposure and acquire only the image within the predicted range, so that only this local data needs to be matched to find the target point. Only when the target is not within the predicted range are full-image data acquired and searched again. By predicting the target range and using the image sensor to expose only the predicted region, the invention reduces the amount of data acquired and the amount of image-matching computation, and improves the speed and accuracy of target tracking.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Apparently, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort. In the drawings:
Fig. 1 is a flow chart of the method for obtaining image information according to the present invention;
Figs. 2a-2c are schematic diagrams of the process of initially acquiring the tracked target according to the present invention;
Fig. 3 is a first schematic diagram of window adjustment according to the present invention;
Fig. 4 is a second schematic diagram of window adjustment according to the present invention;
Fig. 5 is a schematic diagram of the module structure of the image information acquisition system according to the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
As shown in Fig. 1, a method for obtaining image information according to the present invention includes the following steps.
S1: control the image sensor to expose its entire exposure area, extract the exposed image data and perform a full-image search to obtain the target point; based on this target point, set a window range containing the target point, this window range being the range of image data to be acquired for the next frame; record the position of the set window within the entire exposure area of the image sensor.
In step S1 the image sensor is controlled to expose the entire exposure area; the image sensor may be of the charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) type. When images of the acquisition area are captured, the exposure image obtained by exposing the entire exposure area of the image sensor is defined as the full image.
The data of this full-image frame is extracted and a full-image search is performed for a target point that matches a given feature. The feature may be a color, a shape or three-dimensional data in the image, and the target point is the point of interest to the user, i.e. the target to be tracked; for example, the feature search may find a circular point to be tracked, the point with the largest area, a reflective point, and so on. After the target point is found, an initial window range containing the target point is set based on it. This window range is the range of image data to be acquired for the next frame, i.e. the target point is predicted to appear within this window in the next frame. The position of the set window is recorded, the window position being its position within the entire exposure area of the image sensor.
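Purely as an illustration of this step, a minimal full-image search is sketched below in Python/NumPy. It assumes the feature of interest is simply a bright (reflective) spot and takes the centroid of the above-threshold pixels as the target point; the threshold, the synthetic frame and the 640 × 480 sensor resolution are assumptions rather than part of the patented method, and a real implementation would search for whatever color, shape or three-dimensional feature the application defines.

```python
import numpy as np

def full_image_search(image, threshold=200):
    """Return the (x, y) position of the target point in the full image, or None.

    Assumed feature: a bright (reflective) spot; the target point is taken as
    the centroid of all pixels at or above the threshold."""
    ys, xs = np.nonzero(image >= threshold)
    if xs.size == 0:
        return None                       # no point matching the feature was found
    return int(xs.mean()), int(ys.mean())

# Synthetic full frame: one bright 3x3 spot on a dark background (assumed 640x480 sensor).
frame = np.zeros((480, 640), dtype=np.uint8)
frame[104:107, 315:318] = 255             # spot centered near (316, 105)
print(full_image_search(frame))           # -> (316, 105)
```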
Figs. 2a-2c illustrate the initial acquisition of the tracked target. Fig. 2a shows the image sensor exposing the entire exposure area and the data of the resulting full image being extracted; this image contains feature points such as squares and circles. Suppose, as shown in Fig. 2b, that the target point to be tracked is the circular point P; after a search of the full-image data and feature matching, the target point P is found. After the target point is found, the initial window range is set: in Fig. 2c, the dashed square is the initial window range, positioned with the target point P at its center. Of course, the window shape is not limited to a square and may also be a triangle, a rectangle and so on, and the initial window need not be centered on the target point but may have some other relative position to it. The window size may be set with reference to the empirical speed of the object: for example, if, according to this empirical speed, the target point can move by at most 10 pixels between two frames, the window size may be set to 21 × 21 pixels.
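The window set-up described above can be sketched as follows. The square 21 × 21 window centered on the target point comes from the text, while the clamping to the sensor bounds and the 640 × 480 sensor size are added assumptions (a window near the border cannot extend past the exposure area).

```python
def make_window(target, size=21, sensor_width=640, sensor_height=480):
    """Return (x0, y0, size): the top-left corner and side length of a square
    window centered on the target point, clamped to the sensor's exposure area."""
    tx, ty = target
    half = size // 2
    x0 = min(max(tx - half, 0), sensor_width - size)
    y0 = min(max(ty - half, 0), sensor_height - size)
    return x0, y0, size

# 21x21 window around the target point found by the full-image search.
print(make_window((316, 105)))            # -> (306, 95, 21)
```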
S2: control the image sensor to expose the recorded window region within the exposure area, and extract the image data within the window range obtained by the exposure.
In this step, through the interface provided by the hardware, the image sensor is controlled to expose only the recorded window region within the exposure area, and the data within the window range obtained by the exposure is extracted. Because the image sensor is controlled to expose only the small window range, a small-range image is obtained, the amount of data acquired decreases, the amount of data to be processed per frame decreases accordingly, and the transmission speed of each frame of data is also improved.
When the amount of data acquired per frame decreases, the transmission requirement for one frame of image data is reduced, so a higher frame rate can be reached in image transmission. In video output, when every frame is a full image, the maximum acquisition frame rate is 30 frames per second; when only a small-range image is acquired and the data per frame is reduced, more than 100 frames per second, or even higher, can be reached.
In addition, at the same acquisition frame rate, because the amount of data to be processed per frame decreases and the output speed increases, the data can be processed faster. Faster processing in turn leaves more freedom in how time is allocated over the whole processing chain, so that more advanced algorithms can be applied to the data. For example, when processing video at 30 frames per second, smoothing of the image is often omitted, which degrades the result, while increasing the number of smoothing passes introduces delay. When only a small-range image is acquired and the processing speed is increased, the freer time budget allows the number of smoothing passes to be increased and a better result to be achieved.
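To make the data-reduction argument concrete, a rough calculation is sketched below. Only the 21 × 21 window size comes from the embodiment; the 640 × 480 sensor resolution is an assumption, since the patent does not specify one.

```python
full_frame_pixels = 640 * 480             # assumed sensor resolution
window_pixels = 21 * 21                   # window size from the embodiment

print(full_frame_pixels)                  # 307200 pixels read out per full frame
print(window_pixels)                      # 441 pixels read out per windowed frame
print(full_frame_pixels // window_pixels) # roughly 696x less data per frame to transfer and process
```

Even a small part of this headroom is consistent with the jump from 30 full frames per second to more than 100 windowed frames per second mentioned above, the rest being consumed by exposure time and other readout overheads.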
S3: perform feature extraction on the data within the window range and determine by matching whether the target point exists; if not, jump back to step S1; if so, execute step S4.
In this step, features such as the gray level, the aspect ratio and the image coordinates are extracted from the data of the acquired window range, and these features are matched against the target point in the previous frame of image data. The matching algorithm may be any algorithm commonly used in the art, such as gray-level correlation matching, shape-similarity matching, or the distance between three-dimensional coordinates in two frames. Through feature matching, the point in this frame with the highest matching degree to the previous frame is found; this point is the target point P. It may happen that feature matching finds no target point: for example, if the target point is a reflective point and is the only reflective point in the image acquisition area, then whenever the target point is not within the predicted window range, feature matching cannot find it.
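The description names gray-level correlation as one common matching algorithm; purely as an illustration, a minimal normalized cross-correlation match of a small template inside the window patch is sketched below. The template size, acceptance threshold and toy data are assumptions, and the returned coordinates are relative to the patch (adding the window's top-left corner converts them to sensor coordinates).

```python
import numpy as np

def ncc_match(patch, template, min_score=0.8):
    """Slide `template` over the window `patch` and return the best-match
    center (x, y) in patch coordinates together with its normalized
    cross-correlation score, or None if the best score is below `min_score`."""
    ph, pw = patch.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    best_score, best_pos = -1.0, None
    for y in range(ph - th + 1):
        for x in range(pw - tw + 1):
            win = patch[y:y + th, x:x + tw] - patch[y:y + th, x:x + tw].mean()
            denom = np.linalg.norm(win) * t_norm
            if denom == 0:
                continue                  # flat region: correlation undefined, skip
            score = float((win * t).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (x + tw // 2, y + th // 2)
    return (best_pos, best_score) if best_score >= min_score else None

# Toy data: a 21x21 window patch with a bright 3x3 spot, matched against a
# 5x5 template (bright center on a dark border) taken from the previous frame.
patch = np.zeros((21, 21)); patch[9:12, 11:14] = 255.0
template = np.zeros((5, 5)); template[1:4, 1:4] = 255.0
print(ncc_match(patch, template))         # -> ((12, 10), ~1.0): the target is found in the window
```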
When the target point cannot be found, the image sensor is controlled through the hardware interface to expose the entire exposure area, the data of the resulting full image is extracted, a full-image search is performed to find the target point again, and the window range is reset based on this target point; that is, execution returns to step S1.
When feature extraction and matching on the image data within the predicted window range does find the target point to be tracked, step S4 is executed.
S4: calculate the position within the window of the target point obtained in step S3 and judge whether the target point and the window satisfy the preset positional relationship; if so, return directly to step S2; if not, adjust the window position according to the preset window adjustment strategy, update the recorded window position, and then return to step S2.
In this step, the position within the entire exposure area of the image sensor of the target point found in step S3 is calculated, in pixels, and whether the target point and the window satisfy the preset positional relationship is judged. If so, execution returns directly to step S2, and the window region that the image sensor is controlled to expose when acquiring the next frame of image data remains unchanged. If not, i.e. the preset positional relationship between the target point and the window is not satisfied, the window is adjusted according to the preset window adjustment strategy, that is, the window region that the image sensor is controlled to expose when acquiring the next frame of image data is changed.
For 'judging whether the target point and the window satisfy the preset positional relationship and, if not, adjusting the window according to the preset window adjustment strategy', two concrete cases are described below; other similar cases can be handled by analogy.
First case: judge whether the target point is located at the window center; if not, adjust the window position, the adjustment strategy being to place the target point at the center of the adjusted window. As shown in Fig. 3, the square frame represents the image data of the current small-window range. After feature extraction and matching on the data within the window, the target point P is found. In this embodiment the square window is 20 × 20 pixels. Before the adjustment, the upper-left and lower-right corners of the window are at (300, 100) and (320, 120) in the entire exposure area of the image sensor, and the target point P is at (316, 106), all in pixels. Since P is detected not to be at the center of the small window, the window position must be adjusted; after the adjustment the upper-left and lower-right corners of the window are at (306, 96) and (326, 116) respectively.
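A minimal sketch of this first adjustment strategy is given below, using the Fig. 3 coordinates. The corner convention (top-left inclusive, bottom-right = top-left + size) is an assumption chosen so that the numbers work out as in the text.

```python
def recenter_if_off_center(window, target):
    """window = (x0, y0, size); target = (tx, ty), both in sensor coordinates.
    If the target point is not at the window center, return a window of the
    same size centered on the target point; otherwise return the window as is."""
    x0, y0, size = window
    tx, ty = target
    center = (x0 + size // 2, y0 + size // 2)
    if (tx, ty) == center:
        return window                     # preset positional relationship satisfied
    return tx - size // 2, ty - size // 2, size

# Fig. 3 example: 20x20 window with corners (300, 100)-(320, 120), target P = (316, 106).
print(recenter_if_off_center((300, 100, 20), (316, 106)))
# -> (306, 96, 20), i.e. adjusted corners (306, 96)-(326, 116) as in the text
```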
Second case: compute the pixel distance from the target point to the window edge and judge whether this distance exceeds a preset distance threshold L; if not, adjust the window position, the adjustment strategy again being to place the target point at the center of the adjusted window. The preset distance threshold L is set according to the window size and the desired adjustment frequency. As shown in Fig. 4, the square frame represents the image data of the current small-window range. After feature extraction and matching on the data within the window, the target point P is found. In this embodiment the square window is 21 × 21 pixels and the preset distance threshold L is 5 pixels. Before the adjustment, the upper-left and lower-right corners of the window are at (300, 100) and (321, 121) in the entire exposure area of the image sensor, and the target point P is at (316, 106), all in pixels. The distance from the target point P to the right edge of the window is detected to be 5 pixels, equal to the preset distance threshold L. According to the preset judgement condition the window position must be adjusted; after the adjustment the upper-left and lower-right corners of the window are at (306, 96) and (326, 116), and the target point P at (316, 106) is at the center of the adjusted window.
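A minimal sketch of the second strategy with the Fig. 4 numbers (21 × 21 window, threshold L = 5 pixels) follows; the convention that the right and bottom edges lie at top-left + size is again an assumption made so that the 5-pixel distance to the right edge matches the text.

```python
def adjust_if_near_edge(window, target, threshold=5):
    """window = (x0, y0, size); target = (tx, ty), both in sensor coordinates.
    If the target point's distance to the nearest window edge does not exceed
    `threshold` pixels, recenter the window on the target point."""
    x0, y0, size = window
    tx, ty = target
    edge_distance = min(tx - x0, ty - y0, x0 + size - tx, y0 + size - ty)
    if edge_distance > threshold:
        return window                     # far enough from every edge: keep the window
    return tx - size // 2, ty - size // 2, size

# Fig. 4 example: 21x21 window at (300, 100), target P = (316, 106), L = 5 pixels.
print(adjust_if_near_edge((300, 100, 21), (316, 106)))
# distance to the right edge is 5 -> recentered to (306, 96, 21), i.e. corners (306, 96)-(326, 116)
```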
When the window position has been adjusted according to the adjustment strategy, the recorded window position information is updated and execution returns to step S2: when the next frame of image data is acquired, the image sensor is controlled to perform a partial exposure of the recorded window region within the entire exposure area, the image data within the small window range is extracted, and the target point in the next frame continues to be tracked.
Throughout steps S1 to S4, whether a stop signal has arrived must be monitored continuously. When a stop signal arrives, the whole image-information acquisition flow ends; otherwise the flow keeps cycling through steps S1 to S4.
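Putting the steps together, a self-contained toy simulation of the S1-S4 loop is sketched below. The simulated sensor, the 640 × 480 frame size, the motion of the bright spot and all helper names are assumptions used only to show how full-image search, windowed exposure, matching and window adjustment interact; the 21 × 21 window and the 5-pixel threshold come from the embodiment.

```python
import numpy as np

H, W, SIZE, L = 480, 640, 21, 5           # assumed sensor size; window size and threshold from the text

def render_frame(target):
    """Simulated full exposure: a dark frame with a 3x3 bright spot at `target`."""
    frame = np.zeros((H, W), dtype=np.uint8)
    tx, ty = target
    frame[ty - 1:ty + 2, tx - 1:tx + 2] = 255
    return frame

def find_spot(image, x_off=0, y_off=0, threshold=200):
    """Centroid of bright pixels, returned in sensor coordinates, or None."""
    ys, xs = np.nonzero(image >= threshold)
    if xs.size == 0:
        return None
    return int(xs.mean()) + x_off, int(ys.mean()) + y_off

def make_window(target):
    tx, ty = target
    return (min(max(tx - SIZE // 2, 0), W - SIZE),
            min(max(ty - SIZE // 2, 0), H - SIZE))

def near_edge(window, target):
    x0, y0 = window
    tx, ty = target
    return min(tx - x0, ty - y0, x0 + SIZE - tx, y0 + SIZE - ty) <= L

# True trajectory: small steps the window can follow, then a jump that forces a re-search.
positions = [(316 + 4 * i, 105 + 2 * i) for i in range(6)] + [(500, 300), (504, 302)]

window = None
for frame_idx, true_pos in enumerate(positions):
    if window is None:
        # S1: full exposure and full-image search, then set the window.
        target = find_spot(render_frame(true_pos))
        window = make_window(target)
        print(frame_idx, "full-image search ->", target, "window", window)
        continue
    # S2: expose only the recorded window region and extract its data.
    x0, y0 = window
    patch = render_frame(true_pos)[y0:y0 + SIZE, x0:x0 + SIZE]
    # S3: feature extraction and matching inside the window.
    target = find_spot(patch, x_off=x0, y_off=y0)
    if target is None:
        print(frame_idx, "target lost, falling back to full-image search")
        window = None                     # back to S1 for the next frame
        continue
    # S4: adjust the window when the target gets too close to an edge.
    if near_edge(window, target):
        window = make_window(target)
    print(frame_idx, "windowed search ->", target, "window", window)
```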
During target tracking, after the initially acquired full image has been searched to find the tracked target, the position where the target is likely to appear in subsequent frames is predicted, and the image sensor is controlled to perform a partial exposure and acquire only the image within the predicted range, so that only this local data is matched to find the target point. Only when the target is not within the predicted range is the full image acquired and searched. Therefore, by predicting the range in which the target will appear and using the image sensor to expose only the predicted region, the method provided by the present invention reduces the amount of data acquired and the amount of image-matching computation, and improves the speed and accuracy of target tracking.
As shown in Fig. 5, a system for obtaining image information according to the present invention includes:
an image sensor unit, for obtaining exposure images;
a control unit, for controlling the image sensor unit to expose the entire exposure area or the recorded window region within the exposure area, and for stopping the system when a stop signal arrives;
a full-image search and processing unit, for extracting the exposed image data when the image sensor unit exposes the entire exposure area, performing a full-image search to obtain the target point, and setting, based on this target point, a window range containing the target point;
a window data acquisition unit, for extracting the image data within the window range obtained by the exposure when the image sensor unit exposes the recorded window region within the exposure area;
a matching calculation and processing unit, for performing feature extraction on the image data obtained by the window data acquisition unit and determining by matching whether the target point exists;
a coordinate calculation and window position adjustment unit, for calculating the position of the target point in the current frame within the entire exposure area of the image sensor, adjusting the window position according to the relationship between the target point position and the window position, and updating the recorded, adjusted window position;
a window coordinate storage unit, for recording the window position information, the window position being the position of the window within the entire exposure area of the image sensor unit; and
a signal stop unit, for sending the stop signal to the control unit.
In the system, the image sensor unit may be an image sensor of the charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) type.
During image acquisition, under the control of the control unit, the image sensor unit exposes either the entire exposure area or only a local position within the entire exposure area.
When the system starts working and the first frame is acquired, the image sensor unit exposes the entire exposure area to obtain a full image. The full-image search and processing unit extracts the data of this full-image frame and performs a full-image search for a target point that matches a given feature. The feature may be a color, a shape or three-dimensional data in the image, and the target point is the point of interest to the user, i.e. the target to be tracked; for example, the feature search may find a circular point to be tracked, the point with the largest area, a reflective point, and so on. After the target point is found, an initial window range containing the target point is set based on it. This window range is the range of image data to be acquired for the next frame, i.e. the target point is predicted to appear within this window in the next frame. The set window may be a square window positioned with the target point at its center; of course, the window shape is not limited to a square and may also be a triangle, a rectangle and so on, and the initial window need not be centered on the target point but may have some other relative position to it. The window position is the position of the window within the entire exposure area of the image sensor, and the window size may be set with reference to the empirical speed of the object: for example, if the target point can move by at most 10 pixels between two frames, the window size may be set to 21 × 21 pixels.
After the full-image search and processing unit has obtained the full image, searched it to find the target point and set the initial window position based on this target point, the window coordinate storage unit records this window position information.
After the initial window position has been set, for the next frame (the second frame) to be acquired, the control unit reads the window position information recorded in the window coordinate storage unit and controls the image sensor unit to expose only the recorded window region within the entire exposure area. The window data acquisition unit extracts the image data within the window range obtained by the exposure.
Because, under the control of the control unit, the image sensor unit exposes only the small window range, a small-range image is obtained, the amount of data acquired decreases, the amount of data to be processed per frame decreases accordingly, and the transmission speed of each frame of data is also improved.
When the amount of data acquired per frame decreases, the transmission requirement for one frame of image data is reduced, so a higher frame rate can be reached in image transmission. In video output, when every frame is a full image, the maximum acquisition frame rate is 30 frames per second; when only a small-range image is acquired and the data per frame is reduced, more than 100 frames per second, or even higher, can be reached.
In addition, at the same acquisition frame rate, because the amount of data to be processed per frame decreases and the output speed increases, the data can be processed faster. Faster processing in turn leaves more freedom in how time is allocated over the whole processing chain, so that more advanced algorithms can be applied to the data. For example, when processing video at 30 frames per second, smoothing of the image is often omitted, which degrades the result, while increasing the number of smoothing passes introduces delay. When only a small-range image is acquired and the processing speed is increased, the freer time budget allows the number of smoothing passes to be increased and a better result to be achieved.
For the image data extracted by the window data acquisition unit, the matching calculation and processing unit performs feature extraction, for example of the gray level, the aspect ratio and the image coordinates, and matches these features against the target point in the previous frame of image data. The matching algorithm may be any algorithm commonly used in the art, such as gray-level correlation matching, shape-similarity matching, or the distance between three-dimensional coordinates in two frames. Through feature matching, the point in this frame with the highest matching degree to the previous frame is found; this point is the target point.
If the feature matching of the acquired second frame finds the target point, i.e. the target point is within the predicted window range, whether the target point and the window satisfy the preset positional relationship is judged; if so, the window position remains unchanged; if not, the window position is adjusted according to the preset window adjustment strategy and the window position information kept in the window coordinate storage unit is updated.
Then, for the next frame (the third frame) of image data to be acquired, and for subsequent frames, as long as feature matching of the image data finds the target point, i.e. the target point is within the predicted window range, the whole processing of that frame is the same as the processing of the second frame. When feature matching of an acquired frame does not find the target point, the initial window position is set again.
For 'judging whether the target point and the window satisfy the preset positional relationship and, when they do not, adjusting the window according to the preset window adjustment strategy', some concrete cases are described below; other similar cases can be handled by analogy.
First case: judge whether the target point is located at the window center; if not, the window adjustment strategy is to place the target point at the center of the adjusted window.
Second case: compute the pixel distance from the target point to the window edge and judge whether this distance exceeds the preset distance threshold; if not, the window adjustment strategy is to place the target point at the center of the adjusted window.
These two window adjustment methods have already been described in detail for the method of obtaining image information and are not repeated here.
However, if the feature matching of the acquired second frame does not find the target point (for example, the target point is a reflective point and is the only reflective point in the image acquisition area, so that when it is not within the predicted window range feature matching of the data within that range cannot find it), then, for the next frame (the third frame) to be acquired, the control unit controls the image sensor unit to expose the entire exposure area, a full-image search is performed to find the target point again, and the initial window position is reset based on this target point. For the next frame (the fourth frame) of image data to be acquired, and for subsequent frames, as long as feature matching of the image data finds the target point, i.e. the target point is within the predicted window range, the whole processing of that frame is the same as the processing of the third frame, until a frame is acquired in which feature matching does not find the target point, whereupon the initial window position is set again.
During the operation of the system, when the signal stop unit is detected to have sent a stop signal, the control unit makes the system stop working.
It can be seen from the operation of the system that, during target tracking, after the initially acquired full image has been searched to find the tracked target, the position where the target is likely to appear in subsequent frames is predicted, and the image sensor is controlled to perform a partial exposure and acquire only the image within the predicted range, so that only this local data is matched to find the target point. Only when the target is not within the predicted range are full-image data acquired and searched. By predicting the target range and using the image sensor to expose only the predicted region, the system reduces the amount of data acquired and the amount of image-matching computation, and improves the speed and accuracy of target tracking.
The foregoing are only exemplary embodiments of the present invention and do not thereby limit the patent scope of the present invention. Any equivalent structural or flow transformation made using the contents of the description and drawings of the present invention, or any direct or indirect use in other related technical fields, is likewise included in the patent protection scope of the present invention.

Claims (10)

1. A method for obtaining image information, characterized in that it comprises the following steps:
S1: controlling an image sensor to expose its entire exposure area, extracting the exposed image data and performing a full-image search to obtain a target point; based on this target point, setting a window range containing the target point, and recording the position of the set window within the entire exposure area of the image sensor;
S2: controlling the image sensor to expose the recorded window region within the exposure area, and extracting the image data within the window range obtained by the exposure;
S3: performing feature extraction on the data within the window range and determining by matching whether the target point exists; if not, jumping back to step S1; if so, executing step S4;
S4: calculating the position within the window of the target point obtained in step S3 and judging whether the target point and the window satisfy a preset positional relationship; if so, returning directly to step S2; if not, adjusting the window position so that the target point is located at the center of the adjusted window, updating the recorded adjusted window position, and then returning to step S2;
wherein, in steps S1 to S4, whether a stop signal has arrived is monitored continuously, and, if so, the acquisition of image information is terminated.
2. The method for obtaining image information according to claim 1, characterized in that setting the window range containing the target point based on the target point in step S1 comprises: based on the target point, setting a window range centered on the target point.
3. The method for obtaining image information according to claim 2, characterized in that, in step S4, calculating the position within the window of the target point obtained in step S3 and judging whether the target point and the window satisfy the preset positional relationship comprises: after obtaining the position of the target point within the window, judging whether the target point is located at the window center.
4. The method for obtaining image information according to claim 2, characterized in that, in step S4, calculating the position within the window of the target point obtained in step S3 and judging whether the target point and the window satisfy the preset positional relationship comprises: after obtaining the position of the target point within the window, calculating the pixel distance from the target point to the window edge and judging whether this distance exceeds a preset distance threshold.
5. The method for obtaining image information according to claim 4, characterized in that the window is a square window with a default size of 21 pixels × 21 pixels, and the preset distance threshold is 5 pixels.
6. A system for obtaining image information, characterized in that the system comprises:
an image sensor unit, for obtaining exposure images;
a control unit, for controlling the image sensor unit to expose the entire exposure area or the recorded window region within the exposure area, and for stopping the system when a stop signal arrives;
a full-image search and processing unit, for extracting the exposed image data when the image sensor unit exposes the entire exposure area, performing a full-image search to obtain the target point, and setting, based on this target point, a window range containing the target point;
a window data acquisition unit, for extracting the image data within the window range obtained by the exposure when the image sensor unit exposes the recorded window region within the exposure area;
a matching calculation and processing unit, for performing feature extraction on the image data obtained by the window data acquisition unit and determining by matching whether the target point exists;
a coordinate calculation and window position adjustment unit, for calculating the position of the target point found by the matching calculation and processing unit, judging whether the target point and the window satisfy the preset positional relationship and, if not, adjusting the window position so that the target point is located at the center of the adjusted window;
a window coordinate storage unit, for recording the window position information, the window position being the position of the window within the entire exposure area of the image sensor unit; and
a signal stop unit, for sending the stop signal to the control unit.
7. The system for obtaining image information according to claim 6, characterized in that the window range set by the full-image search and processing unit based on the target point is centered on the target point.
8. The system for obtaining image information according to claim 7, characterized in that the coordinate calculation and window position adjustment unit judges whether the target point and the window satisfy the preset positional relationship by judging whether the target point is located at the window center.
9. The system for obtaining image information according to claim 7, characterized in that the coordinate calculation and window position adjustment unit judges whether the target point and the window satisfy the preset positional relationship by calculating the pixel distance from the target point to the window edge and judging whether this distance exceeds the preset distance threshold.
10. The system for obtaining image information according to claim 9, characterized in that the window is a square window with a default size of 21 pixels × 21 pixels, and the preset distance threshold is 5 pixels.
CN201310293942.9A 2013-07-05 2013-07-05 Method and system for obtaining image information Active CN103366174B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310293942.9A CN103366174B (en) 2013-07-05 2013-07-05 Method and system for obtaining image information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310293942.9A CN103366174B (en) 2013-07-05 2013-07-05 Method and system for obtaining image information

Publications (2)

Publication Number Publication Date
CN103366174A CN103366174A (en) 2013-10-23
CN103366174B true CN103366174B (en) 2017-02-08

Family

ID=49367476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310293942.9A Active CN103366174B (en) 2013-07-05 2013-07-05 Method and system for obtaining image information

Country Status (1)

Country Link
CN (1) CN103366174B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886707B (en) * 2014-03-04 2016-07-13 深圳市敢为软件技术有限公司 The method and system of alarm
CN105163037A (en) * 2014-06-04 2015-12-16 苏州宝时得电动工具有限公司 Designated area exposure method for intelligent mower, and intelligent mower

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1402551A (en) * 2001-08-07 2003-03-12 三星电子株式会社 Apparatus and method for automatically tracking mobile object
CN101294953A (en) * 2008-06-05 2008-10-29 中国农业大学 Motor cell real-time tracing system and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7620216B2 (en) * 2006-06-14 2009-11-17 Delphi Technologies, Inc. Method of tracking a human eye in a video image


Also Published As

Publication number Publication date
CN103366174A (en) 2013-10-23

Similar Documents

Publication Publication Date Title
CN109151439B (en) Automatic tracking shooting system and method based on vision
CN106094875B (en) A kind of target follow-up control method of mobile robot
KR100727033B1 (en) Apparatus and method for vision processing on network based intelligent service robot system and the system using the same
EP3190781B1 (en) Autofocus method, device and electronic apparatus
JP6141079B2 (en) Image processing system, image processing apparatus, control method therefor, and program
US9767568B2 (en) Image processor, image processing method, and computer program
JP5484184B2 (en) Image processing apparatus, image processing method, and program
CN107409175A (en) Follow-up control apparatus, tracking and controlling method, tracing control program and automatic follow shot system
CN111242025B (en) Real-time action monitoring method based on YOLO
CN107992099A (en) A kind of target sport video tracking and system based on improvement frame difference method
EP3319042A1 (en) Target tracking device and target tracking method
WO2018086461A1 (en) Visual tracking method based on monocular gesture recognition, and robot
CN111104910B (en) Garbage delivery behavior supervision method and related products
CN106558224B (en) A kind of traffic intelligent monitoring and managing method based on computer vision
WO2017201663A1 (en) Moving object monitoring method, wearable apparatus, and server
KR102315525B1 (en) Surveillance system and operation method thereof
CN108888204B (en) Floor sweeping robot calling device and method
CN103366174B (en) Method and system for obtaining image information
JP5839796B2 (en) Information processing apparatus, information processing system, information processing method, and program
JP5625443B2 (en) Imaging system and imaging apparatus
JP4235018B2 (en) Moving object detection apparatus, moving object detection method, and moving object detection program
US9215426B2 (en) Imaging apparatus and imaging method
CN106325278B (en) A kind of robot localization air navigation aid based on Aleuroglyphus ovatus
CN107734254A (en) A kind of unmanned plane is selected a good opportunity photographic method automatically
CN111031245A (en) Controller and control method for adjusting industrial camera lens

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518057 4th floor, Fangda building, Science Park, Nanshan District, Shenzhen, Guangdong

Applicant after: SHENZHEN TAISHAN SPORTS TECHNOLOGY CORP., LTD.

Address before: 518057 4th floor, Fangda building, Science Park, Nanshan District, Shenzhen, Guangdong

Applicant before: Shenzhen Tol Technology Co., Ltd.

COR Change of bibliographic data
C14 Grant of patent or utility model
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 518057 4th floor, Fangda building, Science Park, Nanshan District, Shenzhen, Guangdong

Patentee after: Shenzhen Taishan Sports Technology Co.,Ltd.

Address before: 518057 4th floor, Fangda building, Science Park, Nanshan District, Shenzhen, Guangdong

Patentee before: SHENZHEN TAISHAN SPORTS TECHNOLOGY Corp.,Ltd.

CP01 Change in the name or title of a patent holder