CN2682483Y - Interactive input control system based on images - Google Patents

Interactive input control system based on images

Info

Publication number
CN2682483Y
CN2682483Y · CN 200420043412 · CN200420043412U
Authority
CN
China
Prior art keywords
image
module
input control
img
interactive input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 200420043412
Other languages
Chinese (zh)
Inventor
钟煜曦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN 200420043412 priority Critical patent/CN2682483Y/en
Application granted
Publication of CN2682483Y publication Critical patent/CN2682483Y/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Abstract

The utility model discloses an image-based interactive input control system comprising an image sampling module, an image processing module, an image analysis module and a signal conversion module; the system may further comprise an image synthesis module. In the utility model, the image sampling module captures images and feeds them to the computer. The image sampling module may comprise one or more image input devices (for example webcams or video cameras), which capture dynamic or static images of the user. The utility model enables real-time, interactive and flexible input.

Description

An image-based interactive input control system
Technical field
The utility model relates to interactive input control systems based on dynamic images, and in particular to an interactive input control system that captures dynamic images with a video camera or webcam.
Background technology
Traditional computer input control devices mainly comprise keyboards, mice, joysticks (including steering wheels, control handles, dance pads and the like) and positioning devices (such as simple ultrasonic or electromagnetic positioning systems). Except for positioning devices, all of these require the user's direct contact, which limits the user's operating space to some extent.
With traditional positioning devices, the user must wear matching sensors or marker balls during operation. An optical motion-capture device, for example, supplies the user with special marker balls to be strapped to key positions on the body; several high-speed cameras then capture and analyse the user's movements and output the three-dimensional coordinates and spatial orientation of the user's body parts. Although such optical capture devices provide high-precision data, they are expensive and complicated to calibrate, and so remain beyond the reach of ordinary users.
Furthermore, traditional computer games have the user play as, or control, one or more virtual characters, so the user and the virtual character remain visually separated.
Finally, traditional computer input schemes use a single image input device (for example a webcam or video camera), so their field of view is narrow.
In view of this, there is a real need for an image-based interactive input control system that overcomes the above shortcomings.
Summary of the invention
The primary purpose of the utility model is to provide an image-based interactive input control system that achieves real-time, interactive and flexible input.
To achieve this object, the image-based interactive input control system of the utility model comprises an image sampling module, an image processing module, an image analysis module and a signal conversion module. In another embodiment of the utility model, the interactive input control system further comprises an image synthesis module.
In the utility model, the image sampling module captures images and feeds them to the computer. In an embodiment of the utility model, the image sampling module consists of one or more image input devices (for example webcams or video cameras) used to capture dynamic or static images of the user.
The image processing module comprises three control modules: a resize module, a colour-space conversion module and a noise reduction module. Its function is to resize, colour-convert and denoise each frame of the image data captured by the image sampling module.
The resize module reduces the resolution of the image captured by the image sampling module, to lower the computational load on the system; the colour conversion module converts the colour of the resized image from the BGR mode to the GREY mode; and the noise reduction module denoises the colour-converted image to remove unwanted noise.
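The three sub-modules above can be sketched as a single preprocessing pass. This is a minimal, numpy-only illustration; function names and the specific subsampling/box-blur choices are assumptions, and a real implementation would more likely use a library such as OpenCV.

```python
import numpy as np

def preprocess(frame_bgr):
    """Resize -> grey -> denoise, mirroring the three sub-modules.

    Illustrative names only; not taken from the patent.
    """
    # Resize module: halve the resolution by subsampling.
    small = frame_bgr[::2, ::2, :].astype(np.float64)
    # Colour conversion module: BGR -> grey with the usual luma weights.
    grey = 0.114 * small[..., 0] + 0.587 * small[..., 1] + 0.299 * small[..., 2]
    # Noise reduction module: a crude 3x3 box blur stands in for the
    # down/up-sampling or Gaussian blur mentioned later in the text.
    padded = np.pad(grey, 1, mode="edge")
    h, w = grey.shape
    smooth = sum(padded[i:i + h, j:j + w]
                 for i in range(3) for j in range(3)) / 9.0
    return np.rint(smooth).astype(np.uint8)
```

A 640*480 BGR frame comes out as a 320*240 single-channel grey image, matching the reduction described in embodiment one.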
The image analysis module comprises four control modules: a comparison module (calculate-difference module), a threshold module, a history storage module (update-history module) and a judgement module. The image analysis module analyses the image data output by the image processing module and judges the user's actions.
The comparison module subtracts the current frame output by the image processing module from the previous frame, pixel by pixel, to obtain an image reflecting the difference between the two frames. The threshold module applies a threshold to this difference image, yielding a purely black-and-white image. The history storage module combines the black-and-white image produced by the threshold module with the preceding N identically processed frames by means of an AND operation (N is an integer whose value depends on the circumstances). The judgement module performs a region decision on the combined image, i.e. it calculates the percentage of each region's total area occupied by white pixels.
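One pass of this four-stage analysis can be sketched as follows. The threshold value 10, history depth 9 and a four-quadrant region split are taken from embodiment one; everything else (names, the deque-based history) is an illustrative assumption.

```python
from collections import deque

import numpy as np

THRESH = 10   # greyscale difference threshold (value from embodiment one)
N_HIST = 9    # history depth: 0.3 s at 30 FPS in embodiment one

def region_ratios(curr, prev, hist):
    """Analyse one pair of grey frames; `hist` holds previous masks."""
    # Comparison module: pixel-by-pixel absolute difference.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    # Threshold module: binary image (white = changed pixel).
    bw = (diff >= THRESH).astype(np.uint8)
    # History storage module: AND-combine with the preceding frames.
    hist.append(bw)
    while len(hist) > N_HIST:
        hist.popleft()
    combined = hist[0].copy()
    for mask in list(hist)[1:]:
        combined &= mask
    # Judgement module: share of white pixels in each of four regions.
    h, w = combined.shape
    regions = [combined[:h // 2, :w // 2], combined[:h // 2, w // 2:],
               combined[h // 2:, :w // 2], combined[h // 2:, w // 2:]]
    return [float(r.mean()) for r in regions]
```

The returned ratios are what the signal conversion module later compares against its percentage threshold.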
The signal conversion module converts the percentage results of the region decision into input signals the computer can recognise; computer software installed on the computer uses the converted signals to realise interactive control between the user and the computer.
The image synthesis module superimposes the image information and the virtual scene, producing a computer-synthesised virtual image. The image synthesis module comprises a matching module, a connection module and a superposition module.
When the image sampling module uses only one image input device for capture, the matching module is not executed; when it uses two image input devices, the system executes this module. The working principle of the matching module, taking two image input devices as an example, is as follows:
First, the matching module samples the image captured by the first image input device, extracting an image sample. It then searches the image captured by the second image input device with this sample, finds the image region most similar to the sample, and outputs the coordinates of that region.
In the utility model, the connection module joins the images captured by multiple image input devices; it only takes effect after the matching module has been executed. The working principle of this module, again taking the two image input devices as an example, is as follows:
First, the connection module analyses the brightness of the two images, computes the mean of their brightnesses, and uses this mean to set the brightness of each image. It then uses the coordinates of the region found by the matching module to join the two images: one image is laid over the other in partially overlapping form, the overlap being their most similar region.
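The matching and connection steps can be sketched together for two horizontally adjacent cameras. This is a simplified one-dimensional version under stated assumptions: the sample strip is taken from the first image's right edge, similarity is measured by sum of absolute differences, and brightness is equalised by shifting each image toward the mean of the two means; none of these specifics are fixed by the patent.

```python
import numpy as np

def match_and_join(img1, img2, sample_w=8):
    """Match a strip of img1 inside img2, then join the two images."""
    # Connection module's brightness step: move both images toward the
    # mean of their two mean brightnesses.
    target = (img1.mean() + img2.mean()) / 2.0
    a = np.clip(img1.astype(np.float64) + (target - img1.mean()), 0, 255)
    b = np.clip(img2.astype(np.float64) + (target - img2.mean()), 0, 255)
    # Matching module: sample a strip near img1's right edge and search
    # img2 for the most similar region (SAD criterion is an assumption).
    sample = a[:, -sample_w:]
    scores = [np.abs(b[:, x:x + sample_w] - sample).sum()
              for x in range(b.shape[1] - sample_w + 1)]
    matched_x = int(np.argmin(scores))
    # Connection module: overlap the images at the matched strip.
    joined = np.hstack([a, b[:, matched_x + sample_w:]])
    return joined, matched_x
```

With two views cut from one wider scene, the matched offset falls at the true overlap and the joined image recovers the full width.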
In the utility model, the superposition module superimposes the virtual image generated by the computer onto the image captured by the image sampling module, placing the computer-generated virtual image on top of the captured image. When the image sampling module uses two or more image input devices for capture, the superposition module instead superimposes the computer-generated virtual image onto the image produced by the connection module.
With this computer system, the image processing method is as follows. A webcam or video camera continuously films the user or scene; the computer captures the footage, obtains an image of the user or scene, and stores it in memory, say as Img0; the image may be colour or greyscale. The resize module in the image processing module reduces the resolution of the captured image Img0 — to 1/2 or 1/4 of full size, or even smaller depending on the circumstances — to lighten the computer's workload; the result is saved, say, as Img1. The colour conversion module in the image processing module converts the colour space of the reduced image Img1 from the BGR colour mode to GREY greyscale, saved, say, as Img2; this step can be skipped if the computer captures greyscale images in the first place. The noise reduction module in the image processing module then denoises Img2, reducing unpredictable noise produced by the image source or the environment.
The comparison module in the image analysis module subtracts the identically processed current frame (say Img2_current) from the previous frame (say Img2_pre), pixel by pixel, obtaining their difference, say Img3. The threshold module in the image analysis module applies a threshold to the subtracted image Img3, producing the black-and-white monochrome image Img4. The history storage module in the image analysis module stores this black-and-white image Img4 in a monochrome history record Img_history, which combines a number of preceding, identically processed black-and-white frames with an AND operation, the number depending on actual needs — for example the preceding 15 frames. The judgement module in the image analysis module divides the image Img_history into segments, the number depending on actual needs, and computes for each segment the percentage of its area occupied by white pixels; when this percentage exceeds a given value, the signal conversion module outputs a control response signal such as forward, backward, left or right.
When the system captures images through two webcams or video cameras, the image Img0_2 captured by the second camera undergoes the same processing as above. The matching module in the image synthesis module then samples image Img0, taking a rectangular region near one of its four edges, say Img0_sample; the position of this region depends on the needs of the synthesised image. Img0_sample is used to search Img0_2 for a similar region, yielding the coordinates of the four corners of that region. One of these values is picked according to the situation: if image 1 joins image 2 left-to-right, the value with the largest X coordinate is taken; there are two such coordinate pairs and either may be chosen, say (Matched_x, Matched_y). The connection module in the image synthesis module takes from Img0_2 the partial image Img0_2_1 spanning X coordinates from Matched_x to the width of Img0_2 and Y coordinates from 0 to the height of Img0_2, and joins the whole of Img0 with Img0_2_1, forming the new image Img_combined. The superposition module in the image synthesis module superimposes the computer-generated virtual image on Img_combined to give the final display output.
When the system captures images through a single webcam or video camera, the superposition module in the image synthesis module directly superimposes the computer-generated virtual image on Img0 to give the final display output.
With the utility model, since a camera serves as the capture device, the user may move freely within the effective filming range of the webcam or video camera. Compared with traditional input devices, the user need not touch any hardware directly; input is flexible and set-up is simple.
Secondly, the utility model changes the form of traditional, purely virtual interaction and removes the visual separation between the virtual scene and the real player inherent in traditional input schemes. The utility model achieves an immersive mode of interaction, letting users see themselves become participants in the game and interact with objects in the virtual space. For example, while the user plays, the camera captures and analyses his or her body movements, which are ultimately turned into input control signals the game can recognise; the user's own appearance can also be shown inside the game, greatly deepening the player's involvement and the game's appeal.
Thirdly, the utility model can perform its processing in real time on inexpensive machines. Moreover, since the required devices are all commonplace, no professional maintenance is needed and upkeep is simple.
Finally, the utility model can use multiple image input devices (for example webcams or video cameras), giving it a wider field of view.
Description of drawings
Fig. 1 is the physical module diagram of the interactive input control system of the utility model;
Figs. 2-4 are schematic diagrams of image superposition in the utility model;
Fig. 5 is the workflow diagram of the interactive input control system of the utility model;
Fig. 6 is a schematic diagram of image capture by the interactive input control system of the utility model when a single image input device is used;
Fig. 7 is a schematic diagram of image capture by the interactive input control system of the utility model when two image input devices are used;
Fig. 8 is a schematic diagram of leg image capture in an embodiment of the interactive input control system of the utility model.
Embodiment
The utility model is described further below with reference to the accompanying drawings.
As shown in Fig. 1, the image-based interactive input control system of the utility model comprises an image sampling module 1, an image processing module 2, an image analysis module 3 and a signal conversion module 4. In another embodiment of the utility model, the interactive input control system further comprises an image synthesis module 5.
In the utility model, the image sampling module 1 captures images and feeds them to the computer. In an embodiment of the utility model, the image sampling module 1 consists of one or more image input devices (for example webcams or video cameras) used to capture dynamic or static images of the user.
The image processing module 2 comprises three control modules: a resize module 21, a colour-space conversion module 22 and a noise reduction module 23. Its function is to resize, colour-convert and denoise each frame of the image data captured by the image sampling module 1.
The resize module 21 reduces the resolution of the image captured by the image sampling module 1, to lower the computational load on the system; the colour conversion module 22 converts the colour of the image resized by module 21 from the BGR mode to the GREY mode; and the noise reduction module 23 denoises the image converted by module 22, removing unwanted noise.
The image analysis module 3 comprises four control modules: a comparison module 31 (calculate-difference module), a threshold module 32, a history storage module 33 (update-history module) and a judgement module 34. The image analysis module 3 analyses the image data output by the image processing module 2 and judges the user's actions.
The comparison module 31 subtracts the current frame output by the image processing module 2 from the previous frame, pixel by pixel, obtaining an image reflecting the difference between the two frames. The threshold module 32 applies a threshold to this difference image, yielding a purely black-and-white image. The history storage module 33 combines the black-and-white image produced by the threshold module 32 with the preceding N identically processed frames by means of an AND operation (N is an integer whose value depends on the circumstances). The judgement module 34 performs a region decision on the image combined by the history storage module 33, i.e. it calculates the percentage of each region's total area occupied by white pixels.
The signal conversion module 4 converts the percentage results of the region decision into input signals the computer can recognise; computer software installed on the computer uses the converted signals to realise interactive control between the user and the computer.
The image synthesis module 5 superimposes the image information and the virtual scene, producing a computer-synthesised virtual image. The image synthesis module 5 comprises a matching module 51 (match module), a connection module 52 and a superposition module 53.
When the image sampling module 1 uses only one image input device for capture, the matching module 51 is not executed; when the image sampling module 1 uses two image input devices, the system executes this module. Referring to Fig. 2, the working principle of the matching module 51, taking two image input devices as an example, is as follows:
First, referring to Fig. 2, the matching module 51 samples the image 80 captured by the first image input device, extracting an image sample 95. It then searches the image 82 captured by the second image input device with this sample 95, finds the most similar image region 97, as shown in Fig. 3, and outputs the coordinates of the region 97.
In the utility model, the connection module 52 joins the images captured by multiple image input devices; it only takes effect after the matching module 51 has been executed. The working principle of this module, again taking the two image input devices as an example, is as follows:
First, the connection module 52 analyses the brightness of the two images 80, 82 shown in Fig. 3, computes the mean of their brightnesses, and uses this mean to set the brightness of each. It then uses the coordinates of the region 97 found by the matching module 51 to join the two images 80, 82: image 80 is laid over image 82 in partially overlapping form, the overlap being their most similar region, as shown in Fig. 4.
In the utility model, the superposition module 53 superimposes the virtual image generated by the computer onto the image captured by the image sampling module 1, placing the computer-generated virtual image on top of the captured image. When the image sampling module 1 uses two or more image input devices for capture, the superposition module 53 instead superimposes the computer-generated virtual image onto the image produced by the connection module 52.
As shown in Fig. 5, the workflow of the image-based interactive input control system of the utility model is as follows:
Step 100: capture images. In the utility model, images may be captured by one or more image input devices.
Step 200: process the images. In an embodiment of the utility model, step 200 comprises three sub-processes: 1. resize each frame of the captured image data (step 211), i.e. reduce the resolution of the captured image to lower the system's computational load; 2. colour-convert the resized image (step 212), i.e. convert its colour space from the BGR mode to the GREY mode; 3. denoise the colour-converted image (step 213) to remove unwanted noise.
Step 300: analyse the images. In an embodiment of the utility model, step 300 comprises four sub-processes: 1. compare the processed images (step 311), i.e. subtract the current processed frame from the previous frame pixel by pixel to obtain the difference between the two frames; 2. apply a threshold to the comparison result (step 312), obtaining a purely black-and-white image; 3. combine the preceding N processed frames (step 313), i.e. combine the N black-and-white images produced by the preceding steps with an AND operation (N is an integer whose value depends on the circumstances); 4. perform a region decision on the combined image (step 314), i.e. calculate, as the circumstances require, the percentage of each region's total area occupied by white pixels.
Step 400: convert the region decision results into signals the computer can recognise; computer software uses the converted signals to realise interactive control between the user and the computer.
Step 500: synthesise the images. This step stitches the image information captured and processed from the image input devices together with the computer-generated virtual scene in real time.
Before image synthesis, the workflow of the interactive input control system of the utility model also includes step 450, which determines whether the image sampling module has more than one image input device. If only one image input device is used for capture, the system directly superimposes the computer-generated virtual image on the captured image (step 513). If there are several image input devices, the system matches the images they capture (step 511), joins them (step 512), and finally superimposes the computer-generated virtual image on the joined image (step 513).
The above flows together complete an image-based interactive input control method.
The utility model is described in further detail below with reference to specific embodiments, to give a deeper understanding of its purpose, features and advantages.
Embodiment one
Referring to Fig. 6, shown is the image capture schematic when the utility model uses a single webcam or video camera 11 (image input device).
The user 612 should stand within the visual range of the webcam or video camera 11; filming may also cover only the upper body, the filmed range depending on the specific application. When using the system of the utility model, the user 612 need only face the webcam or video camera 11 and perform the indicated actions.
In the utility model, the webcam or video camera 11 first captures images of the user 612 at a resolution of 640*480, a colour depth of 24 bits and a frame rate of 30 FPS; the current frame is saved as Img_capture.
Referring to Fig. 1 and Fig. 5, the resize module 21 reduces the resolution of the image Img_capture captured by the webcam or video camera 11 to 320*240, cutting the system's workload by 3/4; the image is saved as Img_resized. After resizing, the colour conversion module 22 converts the colour depth of Img_resized from the BGR colour mode to the GREY greyscale mode; since a GREY greyscale image needs only 1/3 of the memory of a BGR colour image of the same size, this further reduces the processing load; the image is saved as Img_grey. The noise reduction module 23 then denoises the converted image Img_grey — mainly by reducing and then raising the sampling ratio, though Gaussian blur can also be used — to reduce unpredictable noise produced by the webcam or video camera 11 or the environment, thereby reducing errors in the image analysis; the image is saved as Img_smooth.
Next, the comparison module 31 subtracts the identically processed current frame Img_smooth_current from the previous frame Img_smooth_last pixel by pixel and takes the absolute value of the result, obtaining their difference; each pixel's value lies in the range 0-255, and the subtracted image is saved as Img_diff. The threshold module 32 then applies a threshold to the subtracted image Img_diff, setting each pixel value below 10 to 0 and each value of 10 or above to 1, producing the black-and-white image Img_bw. Next, the history storage module 33 stores this black-and-white image Img_bw in a monochrome history record Img_history, which combines the identically processed black-and-white images of the preceding 0.3 seconds — i.e. 0.3*30 FPS = 9 frames — with an AND operation. Finally, the judgement module 34 divides the image Img_history evenly into four rectangular regions and counts, for each region, the proportion of its area occupied by white pixels.
The signal conversion module 4 then converts each regional proportion computed by the judgement module 34: if a proportion exceeds 30%, the program responds — the top-left region maps to the '7' key of the computer's numeric keypad, the top-right region to the '9' key, the bottom-left region to the '1' key and the bottom-right region to the '3' key.
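The region-to-keypad mapping above can be sketched directly. This only returns key names for illustration; emitting real key events to the operating system is outside the scope of the sketch.

```python
# Region order assumed: [top-left, top-right, bottom-left, bottom-right],
# mapped to numeric-keypad keys as in embodiment one.
KEYPAD = {0: "7", 1: "9", 2: "1", 3: "3"}

def to_keys(ratios, threshold=0.30):
    """Return the keypad keys whose region's white-pixel proportion
    exceeds the threshold (30% in embodiment one)."""
    return [KEYPAD[i] for i, r in enumerate(ratios) if r > threshold]
```

For instance, motion concentrated in the top-left and bottom-left regions would yield the '7' and '1' keys.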
Finally, the superposition module 53 synthesises the virtual image produced by the computer 611 with the image Img_capture and outputs the result to the display of the computer 611.
Embodiment two
Fig. 7 shows the image capture schematic when the utility model uses two webcams or video cameras 11, 12 (image input devices). The two cameras 11, 12 are placed on the same vertical line at different heights, their shooting directions held at a fixed angle to each other. This embodiment is described taking full-body filming of the user 612 as the example.
Camera 11 captures images saved as Img_cam1, and camera 12 captures images saved as Img_cam2. Camera 11 mainly films the upper body of the user 612, while camera 12 mainly films the lower body.
Suppose the captured images have a resolution of 320*240, a colour depth of 24 bits and a frame rate of 25 FPS. The resize module 21 in the image processing module 2 reduces the resolution of the captured images Img_cam1 and Img_cam2 to 160*120, saving them as Img_cam1_sm and Img_cam2_sm; the colour conversion module 22 in the image processing module 2 converts the colour depth of Img_cam1_sm and Img_cam2_sm from the BGR colour mode to the GREY greyscale mode, saving them as Img_cam1_sm1 and Img_cam2_sm1 respectively.
The noise reduction module 23 in the image processing module 2 processes the converted images Img_cam1_sm1 and Img_cam2_sm1, saving them under the same variable names.
The capture principle of the first camera 11 is described below; camera 11 handles the capture of the head of the user 612.
First, when the program runs for the first time, the system creates a blank image Img_cam1_sm1_pre with the same size and colour depth as Img_cam1_sm1.
Next, the comparison module 31 in the image analysis module 3 compares the identically processed current frame Img_cam1_sm1 with the previous frame Img_cam1_sm1_pre, yielding the difference greyscale image Img_cam1_diff. The threshold module 32 in the image analysis module 3 applies a threshold to the greyscale image Img_cam1_diff, producing the black-and-white monochrome image Img_cam1_bw, and Img_cam1_sm1 is saved as Img_cam1_sm1_pre. The upper half of Img_cam1_bw is scanned line by line from top to bottom, counting the number N_cam1 of white pixels in each row. When N_cam1 exceeds 1/4 of the total pixels on the current horizontal scan line (160 pixels per line), scanning stops. The position Img_cam1_pos where scanning stopped is recorded; it is the current position of the head of the user 612. On the first run, the program records Img_cam1_pos as Img_cam1_last and goes no further. When Img_cam1_pos is greater than Img_cam1_last, the program computes Img_cam1_pos minus Img_cam1_last; when this difference is large enough, it gives the height of the user 612's jump (relative to the captured image), triggering the 'jump' input operation in the responding program.
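The head-position scan can be sketched as follows. The scan direction, the 1/4-of-row-width criterion and the "difference when the new position exceeds the recorded one" rule follow the embodiment; function names and the return conventions are illustrative assumptions.

```python
import numpy as np

def head_row(bw, frac=0.25):
    """Scan the upper half of the binary image line by line, top down,
    and return the first row whose white-pixel count exceeds `frac` of
    the row width (taken as the head position); -1 if none exists."""
    h, w = bw.shape
    for y in range(h // 2):
        if int(bw[y].sum()) > frac * w:
            return y
    return -1

def jump_delta(pos, last):
    """Per the embodiment, when the newly scanned position exceeds the
    recorded one, their difference is taken as the jump height."""
    return pos - last if pos > last else 0
```

A downstream responder would compare `jump_delta` against some size threshold before emitting the 'jump' input.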
The capture principle of the second camera 12 is described below; camera 12 captures and processes the legs of the user 612.
First, when the system runs for the first time, it creates a blank image Img_cam2_sm2_pre with the same size and color depth as Img_cam2_sm2.
Next, the comparison module 31 in the image analysis module 3 compares the current frame Img_cam2_sm2, processed as above, with Img_cam2_sm2_pre to obtain the difference grayscale image Img_cam2_diff. The threshold module 32 in the image analysis module 3 applies threshold processing to the grayscale image Img_cam2_diff to produce the black-and-white monochrome image Img_cam2_bw. The history storage module 33 in the image analysis module 3 stores this black-and-white image Img_cam2_bw in a monochrome history library Img_cam2_history, which combines the preceding images with an AND operation; how far back the history reaches depends on actual needs. Here it combines the preceding 0.2 seconds of images, i.e. 0.2*30FPS = 6 frames.
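The rolling AND-combination above can be sketched with a fixed-length queue: only pixels that were white in every one of the last six binary frames stay white in the combined image. The 0/1 pixel encoding and the frame shapes are illustrative assumptions.

```python
from collections import deque

HISTORY_LEN = 6                               # 0.2 s * 30 FPS per the text

def update_history(history, frame_bw):
    """Push the newest binary frame; deque(maxlen=6) drops the oldest."""
    history.append(frame_bw)

def combine_history(history):
    """Pixel-wise AND over all frames currently stored in the history."""
    frames = list(history)
    combined = [row[:] for row in frames[0]]  # copy the oldest frame
    for frame in frames[1:]:
        for y, row in enumerate(frame):
            for x, px in enumerate(row):
                combined[y][x] &= px
    return combined

history = deque(maxlen=HISTORY_LEN)
white = [[1, 1, 0]]                           # tiny 1-row binary frames
mixed = [[1, 0, 0]]
for f in (white, white, mixed, white, white, white, white):
    update_history(history, f)                # 7 pushed, last 6 kept
img_cam2_history = combine_history(history)
```

The deque's `maxlen` does the "keep only 0.2 s" bookkeeping automatically, which is why it is a natural fit for this history library.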
Img_cam2_history is then analyzed: on the line at half the height of the image, the positions of the first and the last occurrence of two consecutive white pixels are located, as shown in Figure 8. Position 83 is where the first two consecutive white pixels occur, at one leg 81' of the user 612; position 84 is where the last two consecutive white pixels occur, at the other leg 82'. These are saved as Pos_begin and Pos_end. The distance between Pos_begin and Pos_end can roughly be taken as the width across the two legs 81'-82' of the user 612. The judgment module 34 in the image analysis module 3 determines the midpoint between Pos_begin and Pos_end; this midpoint roughly separates the user's left and right legs. Two rectangular areas, Rect1 and Rect2, are set to the left and right of this midpoint, and the total number of white pixels in each area is calculated. If the white-pixel total of either rectangular area is 2 times that of the other area, the leg of the user 612 at the corresponding position is judged to have moved. If the interval between successive movements of the left and right legs is less than 0.5 seconds, the state of the user 612 is judged to be walking or running, from which the "run" input operation in the responding program is triggered.
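The run-finding and left/right comparison above can be sketched as follows. The 0/1 pixel encoding, the function names, and the one-row test scene are illustrative assumptions; only the logic (first/last pair of consecutive white pixels, midpoint, 2x white-pixel ratio) comes from the text.

```python
def leg_positions(img, y=None):
    """On the row at half the image height (or a given row y), return the
    start indices of the first and last run of two consecutive white pixels,
    standing for Pos_begin and Pos_end."""
    row = img[(len(img) // 2) if y is None else y]
    runs = [x for x in range(len(row) - 1) if row[x] == 1 and row[x + 1] == 1]
    if not runs:
        return None
    return runs[0], runs[-1]

def leg_moved(img, mid):
    """Compare white totals left and right of the midpoint; report the side
    whose total is at least 2x the other side's, else None."""
    left = sum(px for row in img for px in row[:mid])
    right = sum(px for row in img for px in row[mid:])
    if right and left >= 2 * right:
        return "left"
    if left and right >= 2 * left:
        return "right"
    return None

# One-row scene: one leg around x=2-3, the other around x=10-11.
scene = [[0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0]]
pos_begin, pos_end = leg_positions(scene, y=0)
mid = (pos_begin + pos_end) // 2
```

Timing the alternation of the reported sides against the 0.5-second limit would then yield the walk/run decision.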
Finally, the matching module 51 in the image synthesis unit 5 extracts from the bottom of Img_cam1 a rectangular area half the width of Img_cam1 and 3 pixels high, and saves it as the image Img_cam1_sample. The matching module 51 then searches Img_cam2 for the area similar to the sample Img_cam1_sample and obtains the coordinates of the four vertices of the corresponding rectangle; of the two coordinate pairs with the largest Y value, either one is taken and saved as (Matched_x, Matched_y). The connection module 52 in the image synthesis unit 5 then analyzes the brightness information of the images Img_cam1 and Img_cam2, obtains the average of the brightness information of the two images, and uses this average to set the brightness of Img_cam1 and Img_cam2 respectively. In the next step, the connection module 52 joins the whole of image Img_cam1 with the portion of Img_cam2 whose X coordinate runs from 0 to the width of Img_cam2 and whose Y coordinate runs from Matched_y to the height of Img_cam2, and saves the result as Img_combined. The width of Img_combined is therefore the width of Img_cam1 (the width of Img_cam1 equals the width of Img_cam2), and its height is: height of Img_cam1 + (height of Img_cam2 - Matched_y). Finally, the overlay module 53 in the image synthesis unit 5 superimposes the virtual image produced by the computer onto the image Img_combined and outputs the result to the display device of the computer equipment.
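The join geometry above can be sketched as follows: the combined image keeps all of Img_cam1 and appends the rows of Img_cam2 below Matched_y, so its height is h_cam1 + (h_cam2 - Matched_y). The template search and brightness equalization are omitted; the concrete sizes are assumptions.

```python
def combined_size(w_cam1, h_cam1, h_cam2, matched_y):
    """Width of Img_combined equals Img_cam1's width; heights add,
    minus the matched overlap."""
    return w_cam1, h_cam1 + (h_cam2 - matched_y)

def join_images(img_cam1, img_cam2, matched_y):
    """Stack Img_cam1 on top of the rows of Img_cam2 from matched_y down."""
    return img_cam1 + img_cam2[matched_y:]

cam1 = [["a"] * 4 for _ in range(3)]   # 4x3 stand-in for Img_cam1
cam2 = [["b"] * 4 for _ in range(3)]   # 4x3 stand-in for Img_cam2
img_combined = join_images(cam1, cam2, matched_y=1)
```

With matched_y = 1 the result is 4 pixels wide and 3 + (3 - 1) = 5 rows high, matching the height formula in the text.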
In the utility model, the computer mentioned above can be any of the following devices: a personal computer, an image workstation, a portable computer, an electronic game machine, a portable game machine, a personal digital assistant, or a mobile phone.
The optical sensor of the camera is a CMOS or CCD, and its interface is a USB or AV interface. The advantage of the CMOS chip is its low cost, but its refresh rate is lower: with indoor fluorescent lamps as the main lighting it can only reach 10-20 FPS (Frames Per Second, the number of frames displayed per second), although some cameras with a built-in image acceleration chip can reach 18-25 FPS. A camera using a CCD chip can generally stay stable above 25 FPS, but the cost of a CCD chip is far higher than that of a CMOS chip, so generally only mid- and high-end cameras adopt the CCD chip as their optical sensor. The user can therefore choose as needed, but a camera selected for a home capture system must be able to reach more than 20 FPS, to guarantee the smoothness of the picture and reduce lag.
The above discloses only a preferred embodiment of the utility model, which of course cannot limit the scope of rights of the utility model; equivalent variations made according to the claims of the utility model therefore still fall within the scope covered by the utility model.

Claims (10)

1. An interactive input control system based on images, comprising an image sampling module, an image processing module, an image analysis module and a signal conversion module.
2. The interactive input control system according to claim 1, characterized in that: the system further comprises an image synthesis unit.
3. The interactive input control system according to claim 2, characterized in that: the image sampling module can capture images and input the captured images into a computer, and it comprises one or more image input devices.
4. The interactive input control system according to claim 3, characterized in that: the image input device is a camera or video camera.
5. The interactive input control system according to claim 2, characterized in that: the image processing module comprises three control modules: a zoom module (resize module), a color conversion module (color space conversion module) and a noise reduction module.
6. The interactive input control system according to claim 5, characterized in that: the image analysis module comprises four control modules: a comparison module (calculate difference module), a threshold module, a history storage module (update history module) and a judgment module.
7. The interactive input control system according to claim 2, characterized in that: the image synthesis unit comprises a matching module (match module), a connection module and an overlay module.
8. The interactive input control system according to claim 7, characterized in that: when the image sampling module uses only one image input device for image capture, the connection module is not executed; when the image sampling module uses two image input devices for image capture, the system executes this module.
9. The interactive input control system according to claim 8, characterized in that: the connection module is used to join the images captured by the multiple image input devices, and this module takes effect only after the matching module has been executed.
10. The interactive input control system according to claim 9, characterized in that: if only one image input device is used for image capture, the system directly superimposes the virtual image produced by the computer onto the captured image; if there are multiple image input devices, the system matches the images captured by the multiple image input devices, joins the images captured by the multiple image input devices, and finally superimposes the virtual image produced by the computer onto the image after said joining process.
CN 200420043412 2004-03-11 2004-03-11 Interactive input control system based on images Expired - Fee Related CN2682483Y (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200420043412 CN2682483Y (en) 2004-03-11 2004-03-11 Interactive input control system based on images


Publications (1)

Publication Number Publication Date
CN2682483Y true CN2682483Y (en) 2005-03-02

Family

ID=34608001

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200420043412 Expired - Fee Related CN2682483Y (en) 2004-03-11 2004-03-11 Interactive input control system based on images

Country Status (1)

Country Link
CN (1) CN2682483Y (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101641964B (en) * 2007-03-30 2012-01-18 National Institute of Information and Communications Technology Mid-air video interaction device and its program
CN101661329B (en) * 2009-09-22 2015-06-03 Vimicro Corporation (Beijing) Operating control method and device of intelligent terminal



Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20050302

Termination date: 20110311