CN201465138U - Motion recognition system - Google Patents

Motion recognition system

Info

Publication number
CN201465138U
Authority
CN
China
Prior art keywords
image
module
gray level
recognition
different colors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CN200920105373XU
Other languages
Chinese (zh)
Inventor
钟文杰
Current Assignee
Beijing Pixel Software Technology Co Ltd
Original Assignee
Beijing Pixel Software Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Pixel Software Technology Co Ltd
Priority to CN200920105373XU
Application granted
Publication of CN201465138U
Anticipated expiration
Current legal status: Expired - Lifetime

Landscapes

  • Image Analysis (AREA)

Abstract

The utility model discloses a motion recognition system comprising a training module, an image capture module, and a recognition module. The training module converts a training picture into a gray-level image and a plurality of color channel images, and generates a feature file corresponding to each of the gray-level image and the color channel images. The image capture module captures a motion and generates a captured picture. The recognition module, connected to the image capture module and the training module, converts the captured picture generated by the image capture module into a gray-level image and a plurality of color channel images, compares each against the corresponding feature file generated by the training module to obtain a corresponding recognition result, and combines all the recognition results into a final recognition result. Under color-cast ambient lighting, the system can rely on the feature files of the color channels in which the dominant light color is weak, thereby improving the accuracy of motion recognition in such environments.

Description

Motion recognition system
Technical field
The utility model relates to the field of motion recognition, and in particular to a motion recognition system.
Background technology
Motion recognition has been a very active research field in recent years. Using an image capture device, a human motion is identified within a short period of time and converted into an operating instruction for a device such as a computer, so that motion recognition serves as an effective input means in application fields such as games and film production.
The first problem motion recognition must solve is locating the part of the body that performs the motion; this location is the basis of recognition and is commonly called the "region of interest". Because the region of interest is usually an exposed body part such as the face or a hand, its color differs considerably from the environment and from clothing, so it can be separated from other regions by color. Regions of interest are generally determined from color histograms based on color-distribution statistics: the human motion is captured as a static image to be recognized, a color histogram is computed for each candidate region of the image (two regions whose center positions or sizes differ in any respect count as different regions), each region's histogram is compared with a preset histogram, and the most similar region is taken as the region of interest.
However, this method places high demands on color: clothing close to the skin color, surroundings close to the skin color, or single-hued ambient light all cause the recognition rate to drop sharply. Moreover, the method can only determine where in the image the region of interest lies; it cannot determine what that region means.
To eliminate the influence of color on image recognition, existing recognition techniques usually first convert the image to be recognized into a gray-level image and then recognize that gray-level image. After this conversion, the recognition system must locate the region of interest within the whole image — for example, finding the face or a hand in a full-length picture of a person according to features such as contour trends and the relative positions of body parts, after training by artificial-intelligence techniques such as neural networks.
At present, the open-source computer vision library (Open Computer Vision Library, OpenCV) project has adopted an image recognition algorithm based on a cascade of simple features. A motion recognition system using this algorithm generally first trains on picture samples to generate feature files; an image capture device then captures images of human motion, and the captured images (or their gray-level conversions) are recognized against the generated feature files to obtain recognition results.
Because the working environment of a motion recognition system may differ considerably from its training environment, a system that performs well during training may make many errors in practical use. Under color-cast ambient lighting (that is, ambient light of a single color), parts of different colors, such as clothing and skin, take on much the same color, differing only slightly in brightness. Even when such a color image is converted to a gray-level image, parts that originally had clearly distinct colors may become hard to tell apart. Therefore, whether training and recognition use color images or gray-level images, regions of interest are hard to distinguish under color-cast ambient lighting, and recognition accuracy drops.
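The collapse of gray-level contrast under color-cast light can be illustrated with a small numeric example (the pixel values below are hypothetical, chosen only for illustration). Using the standard luma weights, two surfaces that differ clearly in the blue channel map to nearly the same gray level under red-dominated light:

```python
# Hypothetical RGB values of two surfaces (e.g. skin and clothing)
# photographed under strongly red-cast ambient light.
skin = (210, 60, 40)
cloth = (180, 75, 75)

def luma(rgb):
    """Standard ITU-R BT.601 gray-level conversion."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

gray_gap = abs(luma(skin) - luma(cloth))  # ~3.8 gray levels: almost identical
blue_gap = abs(skin[2] - cloth[2])        # 35 levels: clearly separable
print(gray_gap, blue_gap)
```

In the gray-level image the two surfaces are nearly indistinguishable, while the blue channel — the weak color under red-cast light — still separates them, which is exactly the observation the utility model exploits.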
The utility model content
Embodiments of the utility model provide a motion recognition system that can reduce the influence of color-cast ambient light on motion recognition and improve recognition accuracy.
To achieve the above object, the technical solution of the utility model is realized as follows:
A motion recognition system, comprising:
a training module, configured to convert a training picture into a gray-level image and a plurality of color channel images, and to generate a feature file corresponding to each of the gray-level image and the color channel images;
an image capture module, configured to capture a motion and generate a captured picture; and
a recognition module, connected to the image capture module and the training module, configured to convert the captured picture generated by the image capture module into a gray-level image and a plurality of color channel images, to compare each against the corresponding feature file generated by the training module to obtain a corresponding recognition result, and to combine all the recognition results into a final recognition result.
As can be seen from the above technical solution, in this motion recognition system of the utility model, the training module converts the training picture into a gray-level image and a plurality of color channel images and generates the corresponding feature files; the recognition module converts the picture captured by the image capture module into a gray-level image and a plurality of color channel images, compares each against the corresponding feature file to obtain a per-channel recognition result, and combines all the results into a final recognition result. Under color-cast ambient lighting, recognition can rely on the feature files of the color channels in which the dominant light color is weak, improving motion recognition accuracy in such environments.
Description of drawings
Fig. 1 is a schematic structural diagram of the motion recognition system of an embodiment of the utility model;
Fig. 2 is a schematic diagram of the detailed structure of the training module of an embodiment of the utility model;
Fig. 3 is a schematic diagram of the detailed structure of the recognition module of an embodiment of the utility model.
Embodiment
To make the purpose, technical solution, and advantages of the utility model clearer, the utility model is further described below with reference to the accompanying drawings and embodiments.
The utility model mainly converts both the training image and the image to be recognized into a plurality of different color channel images plus a gray-level image, trains and recognizes each separately, and finally combines the recognition results of the color channel images and the gray-level image into the final recognition result.
The principle is as follows: color-cast ambient light is dominated by light of a certain color, but in fact contains the full spectrum, so the reflected light also contains every color. If, instead of analyzing the high-intensity dominant color, recognition uses a weaker color, different colors can still be fully distinguished. The utility model therefore does not simply use a gray-level image; it splits the color image into a plurality of different color channel images plus a gray-level image, processes each separately, and finally combines the recognition results of all the color channel images and the gray-level image into the final recognition result.
Fig. 1 is a schematic structural diagram of the motion recognition system of an embodiment of the utility model. As shown in Fig. 1, the system comprises:
a training module 101, configured to convert a training picture into a gray-level image and a plurality of color channel images, and to generate a feature file corresponding to each of the gray-level image and the color channel images.
When converting the training picture, besides the gray-level image, the set of color channels to convert to can be chosen as needed: for example, the picture can be converted into images of the red (R), green (G), and blue (B) channels, or into images of other color channels. The more channels are converted, the slower the recognition.
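The conversion step can be sketched in plain Python — a minimal illustration using nested lists of RGB tuples; a real implementation would use an image library such as OpenCV:

```python
def split_channels(image):
    """Split an RGB image (rows of (r, g, b) tuples) into four
    single-channel images: R, G, B, and a gray-level image."""
    r_img = [[px[0] for px in row] for row in image]
    g_img = [[px[1] for px in row] for row in image]
    b_img = [[px[2] for px in row] for row in image]
    gray = [[round(0.299 * px[0] + 0.587 * px[1] + 0.114 * px[2])
             for px in row] for row in image]
    return r_img, g_img, b_img, gray

# A 1x2 toy "training picture": one pure-red pixel, one pure-blue pixel.
picture = [[(255, 0, 0), (0, 0, 255)]]
r, g, b, gray = split_channels(picture)
print(r, b, gray)  # r=[[255, 0]], b=[[0, 255]], gray=[[76, 29]]
```

Each returned single-channel image can then be trained and recognized independently, as the modules below describe.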
an image capture module 103, configured to capture a motion and generate a captured picture. The image capture module can be implemented with a video camera or a still camera.
a recognition module 102, connected to the image capture module 103 and the training module 101, configured to convert the captured picture generated by the image capture module 103 into a gray-level image and a plurality of color channel images, to compare each against the corresponding feature file generated by the training module 101 to obtain a corresponding recognition result, and to combine all the recognition results into a final recognition result.
The training module 101 and the recognition module 102 can be implemented on equipment such as a computer terminal or a server.
The detailed structure of the training module is shown in Fig. 2, a schematic diagram of the training module of an embodiment of the utility model. Preferably, the training module comprises:
a motion marking unit 201, configured to mark, in the training picture, the position of the motion to be recognized.
The position of the motion to be recognized is the region of interest.
a training picture conversion unit 202, connected to the marking unit, configured to convert the training picture output by the marking unit, with the position of the motion to be recognized marked, into a gray-level image and a plurality of color channel images;
a sample creation unit 203, connected to the conversion unit, configured to create a training sample for each of the gray-level image and the color channel images converted by the conversion unit;
OpenCV's createsamples tool can be used to create the training samples. For example, when creating the training samples for gesture 1, the training pictures of gesture 1 are positive samples and the training pictures of the other gestures are negative samples. Training samples for the other gestures are created in the same way.
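The positive/negative split described above can be sketched as follows (the gesture names and file lists are hypothetical placeholders; in practice such lists would be fed to OpenCV's sample-creation tool):

```python
def make_sample_sets(pictures_by_gesture):
    """For each gesture, positives are its own pictures and
    negatives are the pictures of every other gesture."""
    sets = {}
    for gesture, pics in pictures_by_gesture.items():
        negatives = [p for other, ps in pictures_by_gesture.items()
                     if other != gesture for p in ps]
        sets[gesture] = {"pos": list(pics), "neg": negatives}
    return sets

# Hypothetical training pictures for three gestures.
pictures = {
    "gesture_1": ["g1_a.png", "g1_b.png"],
    "gesture_2": ["g2_a.png"],
    "gesture_3": ["g3_a.png"],
}
sets = make_sample_sets(pictures)
print(sets["gesture_1"])
```

This split is performed once per channel image set, so each of the R, G, B, and gray-level classifiers is trained on the same positive/negative partition.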
a feature file unit 204, connected to the sample creation unit, configured to generate, from the training samples created by the sample creation unit, the feature files corresponding to the gray-level image and the color channel images.
OpenCV's haartraining tool can be used to generate, for each training sample set, a corresponding Haar feature cascade classifier file, i.e., a feature file.
Preferably, when the training picture is to be converted into red (R), green (G), and blue (B) channel images plus a gray-level image, the training picture conversion unit 202 comprises:
an R conversion subunit, connected to the motion marking unit 201, configured to convert the training picture output by the marking unit, with the position of the motion to be recognized marked, into a red channel image;
a G conversion subunit, connected to the motion marking unit 201, configured to convert the training picture output by the marking unit, with the position of the motion to be recognized marked, into a green channel image;
a B conversion subunit, connected to the motion marking unit 201, configured to convert the training picture output by the marking unit, with the position of the motion to be recognized marked, into a blue channel image; and
a gray-scale conversion subunit, connected to the motion marking unit 201, configured to convert the training picture output by the marking unit 201, with the position of the motion to be recognized marked, into a gray-level image.
Correspondingly, the sample creation unit 203 creates training samples corresponding to the red, green, blue, and gray-scale channel images, and specifically comprises:
an R creation subunit, connected to the R conversion subunit, configured to create the training samples of the red channel image output by the R conversion subunit;
a G creation subunit, connected to the G conversion subunit, configured to create the training samples of the green channel image output by the G conversion subunit;
a B creation subunit, connected to the B conversion subunit, configured to create the training samples of the blue channel image output by the B conversion subunit; and
a gray-scale creation subunit, connected to the gray-scale conversion subunit, configured to create the training samples of the gray-level image output by the gray-scale conversion subunit.
Correspondingly, the feature file unit 204 generates feature files corresponding to the training samples of the red, green, blue, and gray-scale channel images, and specifically comprises:
an R feature subunit, connected to the R creation subunit, configured to generate the feature file corresponding to the red channel training samples created by the R creation subunit;
a G feature subunit, connected to the G creation subunit, configured to generate the feature file corresponding to the green channel training samples created by the G creation subunit;
a B feature subunit, connected to the B creation subunit, configured to generate the feature file corresponding to the blue channel training samples created by the B creation subunit; and
a gray-scale feature subunit, connected to the gray-scale creation subunit, configured to generate the feature file corresponding to the gray-level training samples created by the gray-scale creation subunit.
Fig. 3 is a schematic diagram of the detailed structure of the recognition module of an embodiment of the utility model. As shown in Fig. 3, the recognition module comprises:
a captured picture conversion unit 301, connected to the image capture module 103, configured to convert the captured picture generated by the image capture module into a gray-level image and a plurality of color channel images;
a feature comparison unit 302, connected to the training module 101, configured to compare the gray-level image and the color channel images converted by the captured picture conversion unit against the corresponding feature files generated by the training module, to obtain corresponding recognition results;
a combined recognition unit 303, connected to the feature comparison unit 302, configured to combine all the recognition results obtained by the feature comparison unit into a final recognition result.
Preferably, corresponding to a training module 101 that converts the training picture into red (R), green (G), and blue (B) channel images plus a gray-level image, the captured picture conversion unit 301 comprises:
an R channel subunit, connected to the image capture module 103, configured to convert the image captured by the image capture module into a red channel image;
a G channel subunit, connected to the image capture module 103, configured to convert the image captured by the image capture module into a green channel image;
a B channel subunit, connected to the image capture module 103, configured to convert the image captured by the image capture module into a blue channel image; and
a gray-scale channel subunit, connected to the image capture module 103, configured to convert the image captured by the image capture module into a gray-level image.
Correspondingly, the feature comparison unit 302 is configured to compare the gray-level image and the red, green, and blue channel images converted by the captured picture conversion unit 301 against the gray-level and red, green, and blue channel feature files generated by the training module 101, to obtain corresponding recognition results. When obtaining the recognition results, each of the four images (the red, green, and blue channel images and the gray-level image) can be checked with the Haar classifier corresponding to each gesture. Specifically, the unit comprises:
an R comparison subunit, connected to the R channel subunit and the training module, configured to compare the red channel image converted by the R channel subunit with the red channel feature file output by the training module, to obtain the recognition result of the red channel;
a G comparison subunit, connected to the G channel subunit and the training module, configured to compare the green channel image converted by the G channel subunit with the green channel feature file output by the training module, to obtain the recognition result of the green channel;
a B comparison subunit, connected to the B channel subunit and the training module, configured to compare the blue channel image converted by the B channel subunit with the blue channel feature file output by the training module, to obtain the recognition result of the blue channel; and
a gray-scale comparison subunit, connected to the gray-scale channel subunit and the training module, configured to compare the gray-level image converted by the gray-scale channel subunit with the gray-level feature file output by the training module, to obtain the recognition result of the gray-scale channel.
The combined recognition unit 303 is specifically configured to combine the red, green, blue, and gray-scale channel recognition results obtained by the R, G, B, and gray-scale comparison subunits into a final recognition result.
The recognition results of the four images are assessed together to obtain the final recognition result. The specific assessment method can be defined as needed; for example, if two or more of the four recognition results identify a certain gesture, the final recognition result can determine that the captured image shows that gesture.
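The example assessment rule above ("two or more of the four results agree") can be sketched as a simple majority vote — a minimal illustration, with the threshold and gesture labels taken only from the example:

```python
from collections import Counter

def combine_results(channel_results, threshold=2):
    """channel_results: per-channel recognition labels, e.g. one each for
    the R, G, B, and gray-level images (None = nothing recognized).
    Returns the gesture reported by at least `threshold` channels,
    or None if no gesture reaches the threshold."""
    counts = Counter(r for r in channel_results if r is not None)
    if not counts:
        return None
    gesture, votes = counts.most_common(1)[0]
    return gesture if votes >= threshold else None

# R and B channels agree on gesture_1; G disagrees; gray sees nothing.
print(combine_results(["gesture_1", "gesture_2", "gesture_1", None]))  # gesture_1
print(combine_results(["gesture_1", None, None, None]))                # None
```

Under color-cast light the channel carrying the weak color tends to vote correctly, so even when the gray-level and dominant-color channels fail, agreement among the remaining channels can still produce the right final result.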
As can be seen from the above embodiments, in this motion recognition system of the utility model, the training process converts the training picture into a gray-level image and a plurality of color channel images and generates the corresponding feature files; the recognition process converts the captured picture into a gray-level image and a plurality of color channel images, compares each against the corresponding feature file to obtain a recognition result, and combines all the results into a final recognition result. Because the motion is trained and recognized separately on each color channel image and on the gray-level image, one of the color channels may correspond to a low-intensity color under color-cast ambient light; on that channel the recognition rate rises markedly, increasing the probability that the motion is correctly recognized and reducing the influence of color-cast light on motion recognition. At the same time, the image conversions require little processing, and the recognitions of the individual images run concurrently, so the effect on overall recognition speed is negligible.
It should be understood that the above are merely preferred embodiments of the utility model and are not intended to limit its scope of protection. Any modification, equivalent replacement, or improvement made within the spirit and principles of the utility model shall fall within the scope of protection of the utility model.

Claims (2)

1. A motion recognition system, characterized in that the system comprises:
a training module, configured to convert a training picture into a gray-level image and a plurality of color channel images, and to generate a feature file corresponding to each of the gray-level image and the color channel images;
an image capture module, configured to capture a motion and generate a captured picture; and
a recognition module, connected to the image capture module and the training module, configured to convert the captured picture generated by the image capture module into a gray-level image and a plurality of color channel images, to compare each against the corresponding feature file generated by the training module to obtain a corresponding recognition result, and to combine all the recognition results into a final recognition result.
2. The motion recognition system of claim 1, characterized in that the image capture module is implemented with a video camera or a still camera.
CN200920105373XU 2009-02-12 2009-02-12 Motion recognition system Expired - Lifetime CN201465138U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200920105373XU CN201465138U (en) 2009-02-12 2009-02-12 Motion recognition system


Publications (1)

Publication Number Publication Date
CN201465138U true CN201465138U (en) 2010-05-12

Family

ID=42392486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200920105373XU Expired - Lifetime CN201465138U (en) 2009-02-12 2009-02-12 Motion recognition system

Country Status (1)

Country Link
CN (1) CN201465138U (en)


Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
CX01 Expiry of patent term

Granted publication date: 20100512
