CN101477627A - Movement recognition method and system - Google Patents

Movement recognition method and system

Info

Publication number
CN101477627A
CN101477627A · CNA2009100774675A · CN200910077467A
Authority
CN
China
Prior art keywords
image
subelement
training
channel
different colors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2009100774675A
Other languages
Chinese (zh)
Inventor
钟文杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Pixel Software Technology Co Ltd
Original Assignee
Beijing Pixel Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Pixel Software Technology Co Ltd filed Critical Beijing Pixel Software Technology Co Ltd
Priority to CNA2009100774675A priority Critical patent/CN101477627A/en
Publication of CN101477627A publication Critical patent/CN101477627A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an action recognition method. The method comprises: in a training process, converting training pictures into a grayscale image and a plurality of color-channel images of different colors, and generating feature files corresponding to the grayscale image and to each color-channel image; in a recognition process, converting the captured image into a grayscale image and the same color-channel images, comparing each against its corresponding feature file to obtain per-channel recognition results, and combining the per-channel results into a final recognition result. By adding training and recognition on separate color channels, the method can recognize actions under color-cast ambient light using the feature file of a channel whose color intensity is weak, thereby improving recognition accuracy in such environments. The invention further discloses an action recognition system.

Description

Action recognition method and system
Technical field
The present invention relates to the field of action recognition, and in particular to an action recognition method and system.
Background technology
Action recognition has been a very active research field in recent years. Through an image-capture device, a human action is recognized within a short time and converted into an operating command for a device such as a computer; action recognition thus serves as an effective input method in application fields such as gaming and film production.
The first problem action recognition must solve is locating the body part performing the action. This part is the basis of recognition and is commonly called the "region of interest". Because the region of interest is usually exposed skin such as the face or hands, its color differs considerably from the environment and clothing, so it can be separated from the rest of the image by color. The region of interest is generally determined from color histograms based on color-distribution statistics. Specifically, the human action is captured as a static image to be recognized; a color histogram is computed for each distinct region of the image (two regions whose center positions or sizes differ at all count as distinct); each region's histogram is then compared against a preset histogram, and the most similar region is taken as the region of interest.
However, this method places high demands on color: clothing close to skin color, surroundings close to skin color, or a single-hued ambient light all cause the recognition rate to drop sharply. Moreover, the method only tells where in the image the region of interest lies; it cannot tell what the region of interest means.
To eliminate the influence of color on image recognition, existing recognition techniques usually first convert the image to be recognized into a grayscale image and then recognize the grayscale image. After this conversion, the recognition system must locate the region of interest within the whole image by features such as contour trends and the positional relations of body parts, using artificial-intelligence techniques such as trained neural networks, for example finding the face or a hand in a full-body photograph.
At present, the Open Computer Vision Library (OpenCV) project has adopted an image-recognition algorithm based on a cascade of simple features. Action recognition with this algorithm is broadly divided into two parts: first, a training process generates feature files; then, a recognition process recognizes captured images against the generated feature files to obtain a recognition result.
Fig. 1 is a flowchart of the existing training process. As shown in Fig. 1, the method comprises:
Step 101: obtain the training pictures.
Obtain the training pictures carrying the action features to be trained; these pictures can be captured in advance by an image-capture device such as a camera or video camera.
Step 102: mark the action position to be recognized.
Mark, on each training picture, the specific position of the action to be recognized, i.e. the region of interest; this can be done by editing the training picture.
Step 103: create training samples.
Training samples can be created directly from the training pictures, or the training pictures can first be converted into grayscale images and the samples created from those.
Step 104: generate the feature files.
In step 104, a feature file can be generated from one or more training samples.
Fig. 2 is a flowchart of the existing recognition process. As shown in Fig. 2, the method comprises:
Step 201: capture an image.
An action image can be captured by an image-capture device such as a camera or video camera.
Step 202: recognize against the feature files.
Recognition can compare the captured image directly against the feature file via color histograms, or first convert the captured image into a grayscale image and recognize it against a grayscale feature file.
Step 203: obtain the recognition result.
In the above methods, because the working environment of the recognition system may differ considerably from the training environment, a system that performs well during training may err badly in actual use. Under color-cast ambient light (i.e. ambient light of a single dominant hue), parts of different colors, such as clothing and skin, take on nearly the same color and differ only slightly in brightness; even after converting the color image to grayscale, parts that originally had an obvious color difference may become hard to tell apart. Therefore, whether color images or grayscale images are used for training and recognition, the region of interest is hard to separate under color-cast illumination, and recognition accuracy drops.
Summary of the invention
An embodiment of the present invention provides an action recognition method that reduces the influence of color-cast ambient light on action recognition and improves the recognition accuracy.
An embodiment of the present invention also provides an action recognition system that reduces the influence of color-cast ambient light on action recognition and improves the recognition accuracy.
To achieve the above objects, the technical scheme of the present invention is realized as follows:
An action recognition method, comprising:
in a training process, converting training pictures into a grayscale image and a plurality of color-channel images of different colors, and generating feature files corresponding to the grayscale image and to each color-channel image;
in a recognition process, converting the captured picture into a grayscale image and the same plurality of color-channel images, comparing each against its corresponding feature file to obtain per-channel recognition results, and combining all recognition results into a final recognition result.
An action recognition system, comprising:
a training module, configured to convert training pictures into a grayscale image and a plurality of color-channel images and to generate the feature files corresponding to the grayscale image and to each color-channel image;
an image capture module, configured to capture an action and generate a captured picture;
a recognition module, connected to the image capture module and the training module, configured to convert the captured picture generated by the image capture module into a grayscale image and a plurality of color-channel images, compare each against the corresponding feature file generated by the training module to obtain per-channel recognition results, and combine all recognition results into a final recognition result.
As can be seen from the above technical scheme, the action recognition method and system of the present invention convert both training pictures and captured pictures into a grayscale image and a plurality of color-channel images, generate and apply a feature file per channel, and combine the per-channel recognition results into a final result. By adding training and recognition on separate color channels, a feature file built from a channel of weaker color intensity can still discriminate under color-cast ambient light, thereby improving action recognition accuracy in such environments.
Description of drawings
Fig. 1 is a flowchart of the existing training process;
Fig. 2 is a flowchart of the existing recognition process;
Fig. 3 is an overview flowchart of the action recognition method of an embodiment of the invention;
Fig. 4 is a flowchart of the training process of an embodiment of the invention;
Fig. 5 is a flowchart of the recognition process of an embodiment of the invention;
Fig. 6 is a schematic diagram of the action recognition system architecture of an embodiment of the invention;
Fig. 7 is a schematic diagram of the concrete structure of the training module of an embodiment of the invention;
Fig. 8 is a schematic diagram of the concrete structure of the recognition module of an embodiment of the invention.
Embodiment
To make the objects, technical scheme and advantages of the present invention clearer, the present invention is described in more detail below with reference to the accompanying drawings and embodiments.
The core of the present invention is to convert both the training images and the images to be recognized into several different color-channel images plus a grayscale image during training and recognition, to train and recognize on each separately, and finally to combine the recognition results of the color-channel images and the grayscale image into the final result.
The principle is as follows: color-cast ambient light is dominated by one hue but in fact contains the full spectrum, so the reflected light still contains every color. If, instead of analyzing the dominant strong color, recognition uses a weaker color channel, different colors can still be told apart. The present invention therefore does not simply use a grayscale image; it splits the color image into several color-channel images plus a grayscale image, processes each separately, and combines all the per-channel recognition results into the final recognition result.
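The principle above can be made concrete with a small numeric sketch (a toy illustration, not from the patent; the pixel values are invented): under red-cast light, skin and red clothing map to nearly the same gray level, while their blue-channel values, though both weak, stay clearly apart.

```python
# Toy illustration of why a weak color channel can still discriminate
# under color-cast light. Pixel values are invented for the example.

def to_gray(r, g, b):
    """ITU-R BT.601 luma, the usual RGB-to-grayscale weighting."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Two surfaces under strongly red-cast ambient light: skin and red clothing.
skin = (200, 90, 80)
clothing = (205, 95, 40)

gray_gap = abs(to_gray(*skin) - to_gray(*clothing))  # ~0.13: indistinguishable
blue_gap = abs(skin[2] - clothing[2])                # 40: clearly separable

print(round(gray_gap, 2), blue_gap)
```

The gray levels differ by about 0.13 out of 255, while the blue channel still separates the two surfaces by 40 levels, which is the effect the invention exploits.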
Fig. 3 is an overview flowchart of the action recognition method of the embodiment of the invention. As shown in Fig. 3, the method comprises the following steps:
Step 301: in the training process, convert the training pictures into a grayscale image and a plurality of color-channel images, and generate the feature files corresponding to the grayscale image and to each color-channel image.
Step 302: in the recognition process, convert the captured picture into a grayscale image and the same plurality of color-channel images, compare each against its corresponding feature file to obtain per-channel recognition results, and combine all recognition results into the final recognition result.
The training process may further include marking the position of the action to be recognized on the training pictures and creating training samples. Marking the action position takes place before the training pictures are converted and is the same as in the prior art, so it is not repeated here. Creating training samples is a necessary step before generating the feature files: a corresponding sample set is created for the grayscale image and for each color-channel image converted from the training pictures, and the feature files are then generated from those samples.
In the recognition process, the conversion of the captured picture corresponds to the conversion of the training pictures: whatever image types were produced during training are also produced during recognition, so that comparison and recognition can proceed channel by channel.
When converting the training pictures and captured pictures, besides the grayscale image, which color channels to produce can be chosen as needed; for example, the images can be converted into the red (R), green (G) and blue (B) channel images, or into other color channels. The more channels are produced, the slower recognition becomes.
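As a minimal sketch of this conversion step (plain Python on a tiny nested-list image; a real implementation would more likely use OpenCV's cv2.split and cv2.cvtColor), splitting an RGB picture into the four images used by the method looks like:

```python
# Split an RGB image (rows of (r, g, b) tuples) into R, G, B channel images
# plus a grayscale image, the four groups of images used by the method.

def split_channels(image):
    r = [[px[0] for px in row] for row in image]
    g = [[px[1] for px in row] for row in image]
    b = [[px[2] for px in row] for row in image]
    # BT.601 luma for the grayscale channel
    gray = [[round(0.299 * px[0] + 0.587 * px[1] + 0.114 * px[2]) for px in row]
            for row in image]
    return {"R": r, "G": g, "B": b, "gray": gray}

img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
channels = split_channels(img)
print(channels["R"][0])      # [255, 0]
print(channels["gray"][1])   # [29, 255]
```

Each of the four resulting single-channel images is then trained on and recognized independently, exactly as the flow above describes.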
Below, gesture recognition with the OpenCV recognizer, with conversion into the red, green and blue channel images, is taken as an example to describe the action recognition method of the present invention in detail.
Fig. 4 is a flowchart of the training process of the embodiment of the invention. As shown in Fig. 4, the flow comprises the following steps:
Step 401: obtain the training pictures.
Obtain the training pictures carrying the action features to be trained; these pictures can be captured in advance by an image-capture device such as a camera or video camera.
Step 402: mark the action position to be recognized.
Mark, on each training picture, the specific position of the action to be recognized, i.e. the region of interest.
Step 403: convert the training pictures.
Convert each training picture into the red, green and blue (R, G, B) channel images plus the grayscale image, four groups of images in total.
Step 404: create training samples for the red, green and blue channel images and the grayscale image respectively.
The createsamples tool of OpenCV can be used to create the training samples. For example, when creating the training samples for gesture 1, the training pictures of gesture 1 are positive samples and the training pictures of the other gestures are negative samples. The training samples of the other gestures are created in the same way.
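The positive/negative arrangement described here can be sketched as follows (the file names are invented placeholders; the actual createsamples invocation is not shown):

```python
# For a target gesture, its own training pictures become positive samples and
# the pictures of every other gesture become negative samples.

def make_samples(pictures_by_gesture, target):
    """pictures_by_gesture: dict mapping gesture name -> list of picture files."""
    positives, negatives = [], []
    for gesture, pics in pictures_by_gesture.items():
        (positives if gesture == target else negatives).extend(pics)
    return positives, negatives

pics = {"gesture1": ["g1_a.png", "g1_b.png"],
        "gesture2": ["g2_a.png"],
        "gesture3": ["g3_a.png", "g3_b.png"]}
pos, neg = make_samples(pics, "gesture1")
print(len(pos), len(neg))  # 2 3
```

Running this once per gesture yields one positive/negative split per classifier, matching the "and so on for the other gestures" in the text.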
Step 405: generate the corresponding feature files from the training samples of the red, green and blue channel images and the grayscale image.
The haartraining tool of OpenCV can be used to generate, for each set of training samples, a Haar feature cascade classifier file, i.e. the feature file corresponding to the training samples of each of the red, green, blue and grayscale channel images.
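Driving this per-channel training could look like the sketch below, which builds one command line per channel. The tool name and flags (-data, -vec, -bg) follow OpenCV's historical haartraining utility but are assumptions here, so the real interface should be checked against the tool's own help output.

```python
# Build one haartraining-style command line per channel. Flag names are
# assumptions modeled on OpenCV's historical haartraining tool.

def training_commands(channels, tool="opencv_haartraining"):
    cmds = []
    for ch in channels:
        cmds.append([tool,
                     "-data", f"cascade_{ch}",       # output cascade directory
                     "-vec", f"samples_{ch}.vec",    # positive samples for this channel
                     "-bg", f"negatives_{ch}.txt"])  # negative (background) list
    return cmds

cmds = training_commands(["R", "G", "B", "gray"])
print(len(cmds), cmds[0][0], cmds[3][2])  # 4 opencv_haartraining cascade_gray
```

Each command list could then be handed to subprocess.run, producing the four per-channel feature files that step 405 calls for.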
Fig. 5 is a flowchart of the recognition process of the embodiment of the invention. As shown in Fig. 5, the flow comprises the following steps:
Step 501: capture an image.
An image of the human gesture can be captured by an image-capture device such as a camera or video camera.
Step 502: convert the captured image.
Convert the captured image into the red, green and blue channel images plus the grayscale image, four groups of images in total.
Step 503: recognize the converted red, green, blue and grayscale images against the feature files of the red, green, blue and grayscale channels respectively.
The concrete recognition method is the same as in the prior art and is not repeated here.
Step 504: obtain the recognition results of the red, green and blue channel images and the grayscale image.
Each of the four images is checked with the Haar cascade classifier corresponding to each gesture, yielding a recognition result per channel.
Step 505: combine the results into the final recognition result.
The recognition results of the four image types are assessed together to obtain the final recognition result. The concrete assessment method can be set according to actual needs; for example, if two or more of the four per-channel results identify a certain gesture, the final result can determine that the captured image shows that gesture.
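The example rule above, accepting a gesture once at least two of the four channel results agree, can be sketched as:

```python
# Combine per-channel recognition results by majority vote. The threshold of
# two agreeing channels is the example given in the text, not a fixed rule.
from collections import Counter

def combine(results, min_votes=2):
    """results: dict mapping channel name -> recognized gesture (or None)."""
    votes = Counter(g for g in results.values() if g is not None)
    if not votes:
        return None
    gesture, count = votes.most_common(1)[0]
    return gesture if count >= min_votes else None

per_channel = {"R": "gesture1", "G": "gesture1", "B": None, "gray": "gesture2"}
print(combine(per_channel))  # gesture1
```

Here the red and green channels agree, so the captured image is taken to show gesture 1 even though the blue channel found nothing and grayscale disagreed.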
The present invention also provides an action recognition system implementing the above method. Fig. 6 is a schematic diagram of the action recognition system architecture of the embodiment of the invention. As shown in Fig. 6, the system comprises:
a training module 601, configured to convert the training pictures into a grayscale image and a plurality of color-channel images and to generate the feature files corresponding to the grayscale image and to each color-channel image;
an image capture module 603, configured to capture an action and generate a captured picture;
a recognition module 602, connected to the image capture module 603 and the training module 601, configured to convert the captured picture generated by the image capture module 603 into a grayscale image and a plurality of color-channel images, compare each against the corresponding feature file generated by the training module 601 to obtain per-channel recognition results, and combine all recognition results into the final recognition result.
The concrete structure of the training module is shown in Fig. 7, a schematic diagram of the training module of the embodiment of the invention. Preferably, the training module comprises:
an action marking unit 701, configured to mark the position of the action to be recognized on the training pictures;
a training picture conversion unit 702, connected to the marking unit, configured to convert the marked training pictures output by the marking unit into a grayscale image and a plurality of color-channel images;
a sample creation unit 703, connected to the conversion unit, configured to create corresponding training samples for the grayscale image and for each color-channel image converted by the conversion unit;
a feature file unit 704, connected to the sample creation unit, configured to generate, from the training samples created by the sample creation unit, the feature files corresponding to the grayscale image and the color-channel images.
Preferably, if the training pictures are converted into the red (R), green (G) and blue (B) channel images plus the grayscale image, the training picture conversion unit 702 comprises:
an R conversion subunit, connected to the action marking unit 701, configured to convert the marked training pictures output by the marking unit into red channel images;
a G conversion subunit, connected to the action marking unit 701, configured to convert the marked training pictures output by the marking unit into green channel images;
a B conversion subunit, connected to the action marking unit 701, configured to convert the marked training pictures output by the marking unit into blue channel images;
a grayscale conversion subunit, connected to the action marking unit 701, configured to convert the marked training pictures output by the marking unit 701 into grayscale images.
Correspondingly, the sample creation unit 703 creates training samples for the red, green, blue and grayscale channel images respectively, and specifically comprises:
an R creation subunit, connected to the R conversion subunit, configured to create the training samples of the red channel images output by the R conversion subunit;
a G creation subunit, connected to the G conversion subunit, configured to create the training samples of the green channel images output by the G conversion subunit;
a B creation subunit, connected to the B conversion subunit, configured to create the training samples of the blue channel images output by the B conversion subunit;
a grayscale creation subunit, connected to the grayscale conversion subunit, configured to create the training samples of the grayscale images output by the grayscale conversion subunit.
Correspondingly, the feature file unit 704 generates the feature files corresponding to the training samples of the red, green, blue and grayscale channel images respectively, and specifically comprises:
an R feature subunit, connected to the R creation subunit, configured to generate the feature file corresponding to the training samples of the red channel images created by the R creation subunit;
a G feature subunit, connected to the G creation subunit, configured to generate the feature file corresponding to the training samples of the green channel images created by the G creation subunit;
a B feature subunit, connected to the B creation subunit, configured to generate the feature file corresponding to the training samples of the blue channel images created by the B creation subunit;
a grayscale feature subunit, connected to the grayscale creation subunit, configured to generate the feature file corresponding to the training samples of the grayscale images created by the grayscale creation subunit.
Fig. 8 is a schematic diagram of the concrete structure of the recognition module of the embodiment of the invention. As shown in Fig. 8, the recognition module comprises:
a captured picture conversion unit 801, connected to the image capture module 603, configured to convert the captured picture generated by the image capture module into a grayscale image and a plurality of color-channel images;
a feature comparison unit 802, connected to the training module 601, configured to compare the grayscale image and the color-channel images converted by the captured picture conversion unit against the corresponding feature files generated by the training module, obtaining per-channel recognition results;
a comprehensive recognition unit 803, connected to the feature comparison unit 802, configured to combine all the recognition results obtained by the feature comparison unit into the final recognition result.
Preferably, corresponding to the training module 601 converting the training pictures into the red (R), green (G) and blue (B) channel images plus the grayscale image, the captured picture conversion unit 801 comprises:
an R channel subunit, connected to the image capture module 603, configured to convert the image captured by the image capture module into a red channel image;
a G channel subunit, connected to the image capture module 603, configured to convert the image captured by the image capture module into a green channel image;
a B channel subunit, connected to the image capture module 603, configured to convert the image captured by the image capture module into a blue channel image;
a grayscale channel subunit, connected to the image capture module 603, configured to convert the image captured by the image capture module into a grayscale image.
Correspondingly, the feature comparison unit 802 compares the grayscale image and the red, green and blue channel images converted by the captured picture conversion unit 801 against the grayscale and red, green and blue channel feature files generated by the training module 601 to obtain the corresponding recognition results, and specifically comprises:
an R comparison subunit, connected to the R channel subunit and the training module, configured to compare the red channel image converted by the R channel subunit against the red channel feature file output by the training module, obtaining the red channel recognition result;
a G comparison subunit, connected to the G channel subunit and the training module, configured to compare the green channel image converted by the G channel subunit against the green channel feature file output by the training module, obtaining the green channel recognition result;
a B comparison subunit, connected to the B channel subunit and the training module, configured to compare the blue channel image converted by the B channel subunit against the blue channel feature file output by the training module, obtaining the blue channel recognition result;
a grayscale comparison subunit, connected to the grayscale channel subunit and the training module, configured to compare the grayscale image converted by the grayscale channel subunit against the grayscale channel feature file output by the training module, obtaining the grayscale channel recognition result.
The comprehensive recognition unit 803 is specifically configured to combine the red, green, blue and grayscale channel recognition results obtained by the R, G, B and grayscale comparison subunits into the final recognition result.
In addition, in the action recognition system, described image capture module 603 can adopt image-capturing apparatus such as camera or camera to realize.
Specific implementation method such as each module, unit and executable operations can just repeat no more here with reference to above-mentioned action identification method embodiment in the above-mentioned action recognition system.
By the above embodiments as seen, this action identification method of the present invention, in training process, will train picture to be converted to gray level image and a plurality of channel images with different colors, and generate and described gray level image and a plurality of channel images with different colors characteristic of correspondence file respectively; In identifying, picture be will catch and gray level image and a plurality of channel images with different colors will be converted to, and compare with described gray level image and a plurality of channel images with different colors characteristic of correspondence file respectively, obtain corresponding recognition result, comprehensively obtain final recognition result according to all recognition results.Respectively action is trained and discerned by channel images with different colors and gray scale image, wherein a certain Color Channel may become the less color of intensity under the colour cast luminous environment, the discrimination of this moment will significantly improve, thereby be increased in the possibility that action is correctly validated under the colour cast luminous environment, reduced of the influence of colour cast luminous environment to action recognition, simultaneously, image conversion process does not need a lot of treatment capacities, and the process that each image is discerned is carried out simultaneously, overall very little to the influence of recognition speed, can ignore.
It should be understood that the above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. An action recognition method, characterized in that the method comprises:
in a training process, converting a training picture into a grayscale image and a plurality of channel images of different colors, and respectively generating feature files corresponding to the grayscale image and the channel images of different colors;
in a recognition process, converting a captured picture into a grayscale image and a plurality of channel images of different colors, comparing them respectively with the feature files corresponding to the grayscale image and the channel images of different colors to obtain corresponding recognition results, and combining all the recognition results to obtain a final recognition result.
2. The action recognition method according to claim 1, characterized in that the plurality of channel images of different colors are red, green, and blue channel images.
3. The action recognition method according to claim 1 or 2, characterized in that respectively generating the feature files corresponding to the grayscale image and the channel images of different colors comprises: creating corresponding training samples respectively for the grayscale image and the channel images of different colors converted from the training picture, and generating the feature files corresponding to the grayscale image and the channel images of different colors from the training samples.
4. The action recognition method according to claim 3, characterized in that the position of the action to be recognized is marked in the training picture.
5. An action recognition system, characterized in that the system comprises:
a training module, configured to convert a training picture into a grayscale image and a plurality of channel images of different colors, and to respectively generate feature files corresponding to the grayscale image and the channel images of different colors;
an image capture module, configured to capture an action and generate a captured picture;
an identification module, connected to the image capture module and the training module, configured to convert the captured picture generated by the image capture module into a grayscale image and a plurality of channel images of different colors, to compare them respectively with the feature files generated by the training module for the grayscale image and the channel images of different colors to obtain corresponding recognition results, and to combine all the recognition results to obtain a final recognition result.
6. The action recognition system according to claim 5, characterized in that the training module comprises:
an action calibration unit, configured to mark the position of an action to be recognized in the training picture;
a training-picture conversion unit, connected to the calibration unit, configured to convert the training picture output by the calibration unit, with the position of the action to be recognized marked, into a grayscale image and a plurality of channel images of different colors;
a sample creation unit, connected to the conversion unit, configured to create corresponding training samples respectively for the grayscale image and the channel images of different colors converted by the conversion unit;
a feature file unit, connected to the sample creation unit, configured to generate, from the training samples created by the sample creation unit, the feature files corresponding to the grayscale image and the channel images of different colors.
7. The action recognition system according to claim 6, characterized in that the training-picture conversion unit comprises:
a red (R) conversion subunit, connected to the action calibration unit, configured to convert the training picture output by the calibration unit, with the position of the action to be recognized marked, into a red channel image;
a green (G) conversion subunit, connected to the action calibration unit, configured to convert the training picture output by the calibration unit, with the position of the action to be recognized marked, into a green channel image;
a blue (B) conversion subunit, connected to the action calibration unit, configured to convert the training picture output by the calibration unit, with the position of the action to be recognized marked, into a blue channel image;
a grayscale conversion subunit, connected to the action calibration unit, configured to convert the training picture output by the calibration unit, with the position of the action to be recognized marked, into a grayscale image;
the sample creation unit comprises:
an R creation subunit, connected to the R conversion subunit, configured to create a training sample from the red channel image output by the R conversion subunit;
a G creation subunit, connected to the G conversion subunit, configured to create a training sample from the green channel image output by the G conversion subunit;
a B creation subunit, connected to the B conversion subunit, configured to create a training sample from the blue channel image output by the B conversion subunit;
a grayscale creation subunit, connected to the grayscale conversion subunit, configured to create a training sample from the grayscale channel image output by the grayscale conversion subunit;
the feature file unit comprises:
an R feature subunit, connected to the R creation subunit, configured to generate a feature file corresponding to the training sample of the red channel image created by the R creation subunit;
a G feature subunit, connected to the G creation subunit, configured to generate a feature file corresponding to the training sample of the green channel image created by the G creation subunit;
a B feature subunit, connected to the B creation subunit, configured to generate a feature file corresponding to the training sample of the blue channel image created by the B creation subunit;
a grayscale feature subunit, connected to the grayscale creation subunit, configured to generate a feature file corresponding to the training sample of the grayscale channel image created by the grayscale creation subunit.
8. The action recognition system according to any one of claims 5 to 7, characterized in that the identification module comprises:
a captured-picture conversion unit, connected to the image capture module, configured to convert the captured picture generated by the image capture module into a grayscale image and a plurality of channel images of different colors;
a feature comparison unit, connected to the training module, configured to compare the grayscale image and the channel images of different colors converted by the captured-picture conversion unit respectively with the feature files generated by the training module for the grayscale image and the channel images of different colors, to obtain corresponding recognition results;
a comprehensive recognition unit, connected to the feature comparison unit, configured to combine all the recognition results obtained by the feature comparison unit to obtain a final recognition result.
9. The action recognition system according to claim 8, characterized in that the captured-picture conversion unit comprises:
an R channel subunit, connected to the image capture module, configured to convert the image captured by the image capture module into a red channel image;
a G channel subunit, connected to the image capture module, configured to convert the image captured by the image capture module into a green channel image;
a B channel subunit, connected to the image capture module, configured to convert the image captured by the image capture module into a blue channel image;
a grayscale channel subunit, connected to the image capture module, configured to convert the image captured by the image capture module into a grayscale channel image;
the feature comparison unit is configured to compare the grayscale image and the red, green, and blue channel images converted by the captured-picture conversion unit respectively with the feature files generated by the training module for the grayscale image and the red, green, and blue channel images, to obtain corresponding recognition results;
the comprehensive recognition unit is configured to combine the recognition results of the grayscale image and the red, green, and blue channel images obtained by the feature comparison unit to obtain the final recognition result.
10. The action recognition system according to any one of claims 5 to 9, characterized in that the image capture module is implemented with a still camera or a video camera.
CNA2009100774675A 2009-02-12 2009-02-12 Movement recognition method and system Pending CN101477627A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2009100774675A CN101477627A (en) 2009-02-12 2009-02-12 Movement recognition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNA2009100774675A CN101477627A (en) 2009-02-12 2009-02-12 Movement recognition method and system

Publications (1)

Publication Number Publication Date
CN101477627A true CN101477627A (en) 2009-07-08

Family

ID=40838336

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2009100774675A Pending CN101477627A (en) 2009-02-12 2009-02-12 Movement recognition method and system

Country Status (1)

Country Link
CN (1) CN101477627A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103839042A (en) * 2012-11-27 2014-06-04 腾讯科技(深圳)有限公司 Human face recognition method and human face recognition system
CN103839042B (en) * 2012-11-27 2017-09-22 腾讯科技(深圳)有限公司 Face identification method and face identification system
WO2014101219A1 (en) * 2012-12-31 2014-07-03 青岛海信信芯科技有限公司 Action recognition method and television
CN109269483A (en) * 2018-09-20 2019-01-25 国家体育总局体育科学研究所 A kind of scaling method of motion capture node, calibration system and calibration base station
CN109269483B (en) * 2018-09-20 2020-12-15 国家体育总局体育科学研究所 Calibration method, calibration system and calibration base station for motion capture node
CN112257619A (en) * 2020-10-27 2021-01-22 北京澎思科技有限公司 Target re-identification method, device, equipment and storage medium
CN115272138A (en) * 2022-09-28 2022-11-01 荣耀终端有限公司 Image processing method and related device
CN115272138B (en) * 2022-09-28 2023-02-21 荣耀终端有限公司 Image processing method and related device

Similar Documents

Publication Publication Date Title
Yang et al. Metaanchor: Learning to detect objects with customized anchors
CN102257513B (en) Method for speeding up face detection
CN110796046A (en) Intelligent steel slag detection method and system based on convolutional neural network
CN104769652B (en) Method and system for detecting traffic lights
CN108280426B (en) Dark light source expression identification method and device based on transfer learning
CN110443102B (en) Living body face detection method and device
CN110675328A (en) Low-illumination image enhancement method and device based on condition generation countermeasure network
KR100983346B1 (en) System and method for recognition faces using a infra red light
KR20100073189A (en) Apparatus and method for detecting face image
CN101477627A (en) Movement recognition method and system
KR101891631B1 (en) Image learnig device, image analysis system and method using the device, computer readable medium for performing the method
CN103455790A (en) Skin identification method based on skin color model
Doshi et al. "Hybrid Cone-Cylinder" Codebook Model for Foreground Detection with Shadow and Highlight Suppression
CN106960424B (en) Tubercle bacillus image segmentation and identification method and device based on optimized watershed algorithm
CN112001299A (en) Tunnel vehicle indicator and illuminating lamp fault identification method
CN108830908A (en) A kind of magic square color identification method based on artificial neural network
CN107038690A (en) A kind of motion shadow removal method based on multi-feature fusion
CN113435514A (en) Construction waste fine classification method and device based on meta-deep learning
CN107885324A (en) A kind of man-machine interaction method based on convolutional neural networks
Lee et al. Research on face detection under different lighting
CN108154116A (en) A kind of image-recognizing method and system
CN201465138U (en) Motion recognition system
Nahrawi et al. Contrast enhancement approaches on medical microscopic images: a review
Shim et al. License Plates Detection and Recognition with Multi-Exposure Images
KR102139932B1 (en) A Method of Detecting Character Data through a Adaboost Learning Method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Open date: 20090708