Summary of the Invention
The present invention provides a method and system for image information acquisition, solving the prior-art problems of heavy image-matching computation and slow target-tracking speed.
To solve the above technical problem, the invention provides a method of image information acquisition, comprising the following steps:
S1: control the image sensor to expose the full exposure region, extract the exposed image data and perform a full-image search to find the target point; based on this target point, set a window range containing the target point, and record the position of the set window within the sensor's full exposure region;
S2: control the image sensor to expose only the recorded window region within the exposure area, and extract the image data within the exposed window range;
S3: perform feature extraction on the data within the window range and compute a match to determine whether the target point is present; if not, jump back to step S1; if so, proceed to step S4;
S4: calculate the position within the window of the target point found in step S3, and judge whether the target point and the window satisfy a preset positional relationship; if so, return directly to step S2; if not, adjust the window position according to a preset window-adjustment strategy, update the recorded window position, and then return to step S2.
During steps S1 to S4, the method continuously monitors whether an abort signal has arrived; if so, image information acquisition is terminated.
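The S1–S4 loop can be sketched as a minimal simulation in Python; the frame source, the full-image search routine, and the windowed matching routine are hypothetical stand-ins for the sensor interface and the matching algorithms that the method leaves open:

```python
def track(frames, find_full, find_in_window, half=10, abort=lambda: False):
    """Minimal sketch of the S1-S4 loop: full-image search, then windowed
    search, re-centering the window whenever the target point moves.

    frames:         iterable of 2-D images (one per exposure)
    find_full:      full-image search, returns (x, y) target point or None
    find_in_window: windowed search around a center point, same return type
    half:           half-width of the square window (half=10 -> 21x21)
    abort:          abort-signal monitor, checked every iteration
    """
    window = None           # center of the predicted window, or None
    path = []
    for img in frames:
        if abort():         # abort signal terminates acquisition
            break
        if window is None:  # S1: expose full region, full-image search
            pt = find_full(img)
            if pt is None:
                continue    # no target yet: keep searching full frames
        else:               # S2/S3: expose only the window, match inside it
            pt = find_in_window(img, window, half)
            if pt is None:  # target left the window: back to S1 next frame
                window = None
                continue
        window = pt         # S4: re-center the window on the target point
        path.append(pt)
    return path
```

In a real system `find_in_window` would receive only the window's pixels from the sensor's partial exposure; here a full image is passed for simplicity.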
Further, in step S1, setting the window range containing the target point based on the target point includes: setting, based on the target point, a window range centered on the target point.
Further, in step S4, calculating the position within the window of the target point found in step S3, judging whether the target point and the window satisfy the preset positional relationship, and, if not, adjusting the window position according to the preset window-adjustment strategy, includes: after obtaining the position of the target point within the window, judging whether the target point lies at the window center; if not, adjusting the window position so that the target point lies at the center of the adjusted window.
Further, in step S4, calculating the position within the window of the target point found in step S3, judging whether the target point and the window satisfy the preset positional relationship, and, if not, adjusting the window position according to the preset window-adjustment strategy, includes: after obtaining the position of the target point within the window, calculating the pixel distance from the target point to the window edge and judging whether this distance exceeds a preset distance threshold; if not, adjusting the window position so that the target point lies at the center of the adjusted window.
Further, the window is a square window with a default size of 21 × 21 pixels, and the preset distance threshold is 5 pixels.
The present invention also provides an image information acquisition system, including:
an image sensor unit, for obtaining exposure images;
a control unit, for controlling the image sensor unit to expose the full exposure region or the recorded window region within the exposure area, and for shutting the system down when an abort signal arrives;
a full-image search and processing unit, for extracting the exposed image data when the image sensor unit exposes the full exposure region, performing a full-image search to find the target point, and setting a window range containing the target point based on that point;
a window data acquisition unit, for extracting the image data within the exposed window range when the image sensor unit exposes the recorded window region within the exposure area;
a matching and processing unit, for performing feature extraction on the image data obtained by the window data acquisition unit and computing a match to determine whether the target point is present;
a coordinate calculation and window-position adjustment unit, for calculating the position of the target point found by the matching and processing unit, judging whether the target point and the window satisfy a preset positional relationship, and, if not, adjusting the window position according to a preset window-adjustment strategy;
a window coordinate storage unit, for recording window position information, the window position being the position of the window within the full exposure region of the image sensor unit; and
a signal abort unit, for sending an abort signal to the control unit.
Further, the window range set by the full-image search and processing unit is centered on the target point.
Further, the coordinate calculation and window-position adjustment unit judging whether the target point and the window satisfy the preset positional relationship, and, if not, adjusting the window position according to the preset window-adjustment strategy, includes: judging whether the target point lies at the window center; if not, adjusting the window position so that the target point lies at the center of the adjusted window.
Further, the coordinate calculation and window-position adjustment unit judging whether the target point and the window satisfy the preset positional relationship, and, if not, adjusting the window position according to the preset window-adjustment strategy, includes: calculating the pixel distance from the target point to the window edge and judging whether this distance exceeds a preset distance threshold; if not, adjusting the window position so that the target point lies at the center of the adjusted window.
Further, the window is a square window with a default size of 21 × 21 pixels, and the preset distance threshold is 5 pixels.
The beneficial effects of the invention are as follows: with the method and system of image information acquisition provided by the present invention, during target tracking, after the initially acquired full image is searched to find the tracking target, the position where the target is likely to appear in subsequent frames is predicted, and the image sensor is controlled to perform a partial exposure so that only the image within the predicted region is acquired; the matching computation is then performed only on this local data to find the target point. Only when the target is not within the predicted region are full-image acquisition and search performed again. By predicting the target's range and using the image sensor's partial exposure to acquire only the predicted-region data, the invention reduces the image data acquisition and matching computation, improving both the speed and the precision of target tracking.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
As shown in Fig. 1, a method of image information acquisition of the present invention includes the following steps:
S1: control the image sensor to expose the full exposure region, extract the exposed image data, and perform a full-image search to find the target point; based on this target point, set a window range containing the target point — this window range being the image data range to be acquired in the next frame — and record the position of the set window within the sensor's full exposure region.
In step S1, the image sensor that exposes the full exposure region may be of the charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) type. When acquiring images of the acquisition area, the exposure image obtained by exposing the full exposure region is defined as the full image. This frame of full-image data is extracted and a full-image search is performed for a target point that matches a given feature. The feature may be a color, a shape, or three-dimensional data in the image, and the target point is the point of interest to the user, i.e. the target to be tracked — for example, a circular point found through the feature search, the point of largest area, a reflective point, and so on. After this target point is found, an initial window range containing the target point is set based on it; this window range is the image data range to be acquired in the next frame, i.e. the target point is predicted to appear within this window in the next frame. The position of the set window is recorded, this window position being its position within the sensor's full exposure region.
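As a concrete sketch of this step, the full-image search for, say, a reflective point — taken here simply as the brightest pixel above a threshold, one of the example features named above — and the setting and recording of a 21 × 21 window around it might look as follows; the threshold, sensor bounds, and function names are illustrative assumptions:

```python
def full_image_search(image, threshold=200):
    """S1 sketch: scan the whole exposure region for the brightest pixel
    above `threshold` (standing in for the feature search)."""
    best, best_v = None, threshold
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v > best_v:
                best, best_v = (x, y), v
    return best  # (x, y) target point, or None if no match

def set_window(target, half=10, bounds=(640, 480)):
    """Set a (2*half+1)-square window centered on the target point and
    return its recorded position as (upper-left, lower-right) corners,
    clamped to the sensor's full exposure region `bounds`."""
    (x, y), (w, h) = target, bounds
    x0, y0 = max(x - half, 0), max(y - half, 0)
    x1, y1 = min(x + half, w - 1), min(y + half, h - 1)
    return (x0, y0), (x1, y1)
```

The returned corner pair is what the window coordinate record would store for the next frame's partial exposure.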
Figs. 2a–2c schematically show the initial acquisition of the tracking target. Fig. 2a shows the image sensor exposing the full exposure region and the extracted full-image data, which contains feature points such as squares and circles. Suppose, as shown in Fig. 2b, the target point to be tracked is the circular point P; after searching the full-image data and matching features, target point P is found. After the target point is found, the initial window range is set: as shown in Fig. 2c, the square frame drawn in dashed lines is the initial window range, positioned with target point P at its center. Of course, the window shape is not limited to a square — it may also be a triangle, a rectangle, etc. — and the initial window need not be centered on the target point but may have some other relative positional relationship to it. The window size may be set with reference to the target's empirical motion speed: for example, if, according to this empirical speed, the target point can move at most 10 pixels between two frames, the window size may be set to 21 × 21 pixels.
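The sizing rule used in this example generalizes: for an empirical maximum inter-frame displacement of d pixels, a square window of side 2·d + 1 guarantees that a target starting at the window center is still inside it one frame later. A one-line helper makes this explicit:

```python
def window_size(max_displacement):
    """Side of a square window that keeps a centered target inside it for
    one frame, given the empirical maximum inter-frame motion in pixels."""
    return 2 * max_displacement + 1
```

With the example's 10-pixel motion this gives exactly the 21 × 21 window of the embodiment.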
S2: control the image sensor to expose only the recorded window region within the exposure area, and extract the image data within the exposed window range.
In this step, through an interface provided by the hardware, the image sensor is controlled to expose only the recorded window region within the exposure area, and the data within the exposed window range is extracted. Since the sensor is controlled to expose only the small window range, only a small-range image is obtained: the data acquisition per frame is reduced, the per-frame processing load decreases accordingly, and the per-frame transmission speed also improves.
When the per-frame acquisition amount decreases, the transmission demand of one frame of image data is reduced, so a higher frame rate can be reached in image transmission. In video output, when each frame acquires the full image, the maximum acquisition frame rate is about 30 frames per second; when only a small-range image is acquired and the per-frame acquisition amount is reduced, frame rates above 100 frames per second, or even higher, can be reached.
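The data reduction behind these frame-rate figures is easy to estimate. Assuming, purely for illustration, a 640 × 480 full exposure region (the description does not state a sensor resolution), the 21 × 21 window cuts the per-frame data by almost three orders of magnitude:

```python
full_pixels = 640 * 480    # assumed full exposure region (illustrative)
window_pixels = 21 * 21    # the 21 x 21 pixel window of the embodiment
reduction = full_pixels / window_pixels
print(round(reduction))    # prints 697
```

The achievable frame rate scales with this ratio only up to the sensor's readout and exposure-time limits, which is why the text quotes 100+ fps rather than a proportional figure.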
In addition, at the same image acquisition frame rate, because each frame carries less data to be processed, the data output speed increases and data processing becomes faster. Faster processing in turn leaves a freer time budget within the overall data-processing pipeline, so that more sophisticated algorithms can be applied to the data. For example, when processing video at 30 frames per second, image smoothing is often either omitted, giving a poor result, or applied repeatedly, introducing delay; when only a small-range image is acquired and the processing speed is improved, the freer time budget allows more smoothing passes and thus a better result.
S3: perform feature extraction on the data within the window range and compute a match to determine whether the target point is present; if not, jump back to step S1; if so, proceed to step S4.
In this step, features are extracted from the acquired window-range data — for example, the grey level of the image, the aspect ratio, image coordinates, and so on — and these features are matched against the target point in the previous frame's image data. The matching algorithm may be any algorithm commonly used in the art, such as grey-level correlation matching, shape similarity, or the distance between three-dimensional coordinates across two frames. Through feature matching, the point in this frame with the highest matching degree to the previous frame is found; this point is target point P. If no target point is found through feature matching — for example, the target point is a reflective spot and the only reflective spot in the acquisition area, so that when it lies outside the predicted window range the feature matching cannot find it — then, through the interface provided by the hardware, the image sensor is controlled to expose the full exposure region, the exposed full-image data is extracted, a full-image search is performed to find the target point again, and the window range is reset based on it; that is, execution returns to step S1. If, when feature matching is performed on the image data within the predicted window range, the target point to be tracked is found, execution continues with step S4.
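A minimal sketch of this windowed matching step, scoring each position by grey-level difference against a small template cut around the previous frame's target point (a simple stand-in for the grey-level correlation matching named above); all names and the scoring rule are illustrative:

```python
def match_in_window(window_img, template, max_score=None):
    """S3 sketch: slide `template` (a patch around the previous target
    point) over the window data and return the best-matching center,
    scored by sum of absolute grey-level differences (lower is better)."""
    th, tw = len(template), len(template[0])
    H, W = len(window_img), len(window_img[0])
    best, best_sad = None, float("inf")
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            sad = sum(abs(window_img[y + j][x + i] - template[j][i])
                      for j in range(th) for i in range(tw))
            if sad < best_sad:
                best_sad, best = sad, (x + tw // 2, y + th // 2)
    if max_score is not None and best_sad > max_score:
        return None  # no acceptable match: fall back to step S1
    return best
```

Returning `None` models the "target point not found" branch that sends the method back to the full-image search.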
S4: calculate the position within the window of the target point found in step S3, and judge whether the target point and the window satisfy a preset positional relationship; if so, return directly to step S2; if not, adjust the window position according to a preset window-adjustment strategy, update the recorded window position, and then return to step S2.
In this step, the position within the sensor's full exposure region of the target point found in step S3 is calculated, in units of pixels. Whether the target point and the window satisfy the preset positional relationship is then judged. If so, execution returns directly to step S2, and when the next frame of image data is acquired, the window region the sensor is controlled to expose is unchanged. If not — that is, the preset positional relationship between the target point and the window is not satisfied — the window is adjusted according to the preset window-adjustment strategy, i.e. the window region the sensor is controlled to expose when acquiring the next frame is changed.
For "judging whether the target point and the window satisfy the preset positional relationship, and, if not, adjusting the window according to the preset window-adjustment strategy", two concrete cases are described below; other similar cases follow by analogy.
First case: judge whether the target point lies at the window center; if not, adjust the window position, the adjustment strategy being to place the target point at the center of the adjusted window. As shown in Fig. 3, the square frame represents the acquired small-window image data of the current frame. Feature matching on the data within the window finds target point P. In this embodiment the square window is 20 × 20 pixels. Before the window is adjusted, its upper-left and lower-right corners lie at (300, 100) and (320, 120) in the sensor's full exposure region, while target point P lies at (316, 106), in pixel units. On detection, P is not at the center of the small window, so the window position must be adjusted: the upper-left and lower-right corners of the adjusted window are (306, 96) and (326, 116) respectively.
Second case: detect the pixel distance from the target point to the window edge and judge whether it exceeds a preset distance threshold L; if not, adjust the window position, the adjustment strategy being to place the target point at the center of the adjusted window. The preset distance threshold L is set according to the window size and the desired adjustment frequency. As shown in Fig. 4, the square frame represents the acquired small-window image data of the current frame. Feature matching on the data within the window finds target point P. In this embodiment the square window is 21 × 21 pixels and the preset distance threshold L is 5 pixels. Before the window is adjusted, its upper-left and lower-right corners lie at (300, 100) and (321, 121) in the sensor's full exposure region, while target point P lies at (316, 106), in pixel units. On detection, the pixel distance from P to the right edge of the window is 5 pixels, equal to the preset distance threshold L and therefore not greater than it; according to the preset condition, the window position must be adjusted. The upper-left and lower-right corners of the adjusted window are (306, 96) and (326, 116) respectively, placing target point P at (316, 106), the center of the adjusted window.
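The edge-distance test of this second case, again with the embodiment's numbers (corners (300, 100) and (321, 121), P = (316, 106), L = 5), can be sketched as follows; note that, as described above, the window is adjusted when the distance equals the threshold, not only when it is smaller:

```python
def needs_adjust(target, top_left, bottom_right, threshold=5):
    """Second adjustment strategy: the window must be re-centered unless
    the target point is strictly more than `threshold` pixels away from
    every edge of the window."""
    (x, y), (x0, y0), (x1, y1) = target, top_left, bottom_right
    margin = min(x - x0, x1 - x, y - y0, y1 - y)  # distance to nearest edge
    return margin <= threshold
```

For P = (316, 106) the nearest-edge distance is 321 − 316 = 5 pixels, equal to L, so the window is adjusted, in agreement with the embodiment.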
When the window position has been adjusted according to the adjustment strategy, the recorded window position information is updated and execution returns to step S2: when the next frame of image data is acquired, the image sensor is controlled to perform a partial exposure of the recorded window region within the full exposure region, the image data within the small window range is extracted, and tracking of the target point in the next frame continues.
Throughout steps S1 to S4, whether an abort signal has arrived must be monitored continuously; when an abort signal arrives, the whole image information acquisition flow is terminated; otherwise the flow keeps cycling through steps S1 to S4.
During target tracking, after the initially acquired full image is searched to find the tracking target, the position where the target is likely to appear in subsequent frames is predicted, and the image sensor is controlled to perform a partial exposure so that only the image within the predicted region is acquired; the matching computation is then performed only on this local data to find the target point. Only when the target is not within the predicted region are full-image acquisition and search performed. Therefore, by predicting the range where the target will appear and using the image sensor's partial exposure to acquire only the predicted-region data, the method provided by the present invention reduces the data acquisition and image matching computation, improving both the speed and the precision of target tracking.
As shown in Fig. 5, a system of image information acquisition of the present invention includes:
an image sensor unit, for obtaining exposure images;
a control unit, for controlling the image sensor unit to expose the full exposure region or the recorded window region within the exposure area, and for shutting the system down when an abort signal arrives;
a full-image search and processing unit, for extracting the exposed image data when the image sensor unit exposes the full exposure region, performing a full-image search to find the target point, and setting a window range containing the target point based on that point;
a window data acquisition unit, for extracting the image data within the exposed window range when the image sensor unit exposes the recorded window region within the exposure area;
a matching and processing unit, for performing feature extraction on the image data obtained by the window data acquisition unit and computing a match to determine whether the target point is present;
a coordinate calculation and window-position adjustment unit, for calculating the position of the target point in the current frame within the sensor's full exposure region, adjusting the window position according to the relationship between the target point position and the window position, and updating the recorded adjusted window position;
a window coordinate storage unit, for recording window position information, the window position being the position of the window within the full exposure region of the image sensor unit; and
a signal abort unit, for sending an abort signal to the control unit.
In the system, the image sensor unit may be an image sensor of the charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) type.
During image acquisition, under the control of the control unit, the image sensor unit exposes either the full exposure region or only a local region within it.
When the system starts, for the 1st frame the image sensor unit exposes the full exposure region to obtain the full image; the full-image search and processing unit extracts this frame of full-image data and performs a full-image search for a target point matching the feature. The feature may be a color, a shape, or three-dimensional data in the image, and the target point is the point of interest to the user, i.e. the target to be tracked — for example, a circular point found through the feature search, the point of largest area, a reflective point, and so on. After this target point is found, an initial window range containing it is set; this window range is the image data range to be acquired in the next frame, i.e. the target point is predicted to appear within this window in the next frame. The set window may be a square window positioned with the target point at its center. Of course, the window shape is not limited to a square — it may also be a triangle, a rectangle, etc. — and the initial window need not be centered on the target point but may have some other relative positional relationship to it. The window position is the window's position within the sensor's full exposure region, and the window size may be set with reference to the target's empirical motion speed: for example, if, according to this empirical speed, the target point can move at most 10 pixels between two frames, the window size may be set to 21 × 21 pixels.
After the full-image search and processing unit obtains the full image, performs the full-image search to find the target point, and sets the initial window position based on it, the window coordinate storage unit records this window position information.
After the initial window position is set, for the next frame (the 2nd frame) to be acquired, the control unit reads the window position information recorded in the window coordinate storage unit and controls the image sensor unit to expose only the recorded window region within the full exposure region. The window data acquisition unit then extracts the image data within the exposed window range.
Because, under the control unit's control, the image sensor unit exposes only the small window range, only a small-range image is obtained: the data acquisition per frame is reduced, the per-frame processing load decreases accordingly, and the per-frame transmission speed also improves.
When the per-frame acquisition amount decreases, the transmission demand of one frame of image data is reduced, so a higher frame rate can be reached in image transmission. In video output, when each frame acquires the full image, the maximum acquisition frame rate is about 30 frames per second; when only a small-range image is acquired and the per-frame acquisition amount is reduced, frame rates above 100 frames per second, or even higher, can be reached.
In addition, at the same image acquisition frame rate, because each frame carries less data to be processed, the data output speed increases and data processing becomes faster. Faster processing in turn leaves a freer time budget within the overall data-processing pipeline, so that more sophisticated algorithms can be applied to the data. For example, when processing video at 30 frames per second, image smoothing is often either omitted, giving a poor result, or applied repeatedly, introducing delay; when only a small-range image is acquired and the processing speed is improved, the freer time budget allows more smoothing passes and thus a better result.
For the image data extracted by the window data acquisition unit, the matching and processing unit performs feature extraction — for example, the grey level of the image, the aspect ratio, image coordinates, and so on — and matches these features against the target point in the previous frame's image data. The matching algorithm may be any algorithm commonly used in the art, such as grey-level correlation matching, shape similarity, or the distance between three-dimensional coordinates across two frames. Through feature matching, the point in this frame with the highest matching degree to the previous frame is found; this point is the target point.
If, in feature matching, the target point is found in the acquired 2nd frame — i.e. the target point lies within the predicted window range — whether the target point and the window satisfy the preset positional relationship is judged: if so, the window position is unchanged; if not, the window position is adjusted according to the preset window-adjustment strategy and the window position information stored in the window coordinate storage unit is updated.
Thereafter, for the next frame (the 3rd frame) to be acquired and for subsequent frames, as long as the feature-matching processing of the image data finds the target point — i.e. the target point is within the predicted window range — each frame is processed in the same way as the 2nd frame. When the target point is not found in an acquired frame through feature matching, the initial window position is reset.
For "judging whether the target point and the window satisfy the preset positional relationship, and, when not satisfied, adjusting the window according to the preset window-adjustment strategy", some concrete cases are given below; other similar cases follow by analogy.
First case: judge whether the target point lies at the window center; if not, the window-adjustment strategy is to place the target point at the center of the adjusted window.
Second case: detect the pixel distance from the target point to the window edge and judge whether it exceeds a preset distance threshold; if not, the window-adjustment strategy is to place the target point at the center of the adjusted window.
These two window-adjustment methods are described in detail in the method of image information acquisition above and are not repeated here.
However, if the target point is not found in the acquired 2nd frame through feature matching — for example, the target point is a reflective spot and the only reflective spot in the acquisition area, so that when it lies outside the predicted window range, feature matching on the image data within that range cannot find it — then, for the next frame (the 3rd frame) to be acquired, the control unit controls the image sensor unit to expose the full exposure region, a full-image search is performed to find the target point again, and the initial window position is reset based on it. Thereafter, for the next frame (the 4th frame) to be acquired and for subsequent frames, as long as the feature-matching processing of the image data finds the target point — i.e. the target point is within the predicted window range — each frame is processed in the same way as the 2nd frame described above; whenever feature matching on an acquired frame fails to find the target point, the initial window position is reset again.
During system operation, when the signal abort unit is detected to have sent an abort signal, the control unit shuts the system down.
As can be seen from the system's operation, during target tracking, after the initially acquired full image is searched to find the tracking target, the position where the target is likely to appear in subsequent frames is predicted, and the image sensor performs a partial exposure under control so that only the image within the predicted region is acquired; the matching computation is then performed only on this local data to find the target point. Only when the target is not within the predicted region are full-image acquisition and search performed. By predicting the target's range and using the image sensor's partial exposure to acquire only the predicted-region data, the system reduces the data acquisition and image matching computation, improving both the speed and the precision of target tracking.
The foregoing is only an exemplary embodiment of the present invention and does not thereby limit the patent protection scope of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the protection scope of the present invention.