CN108205651A - Method and device for recognizing an eating action - Google Patents

Method and device for recognizing an eating action

Info

Publication number
CN108205651A
CN108205651A (application CN201611187196.5A)
Authority
CN
China
Prior art keywords
meal
face
region
area
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611187196.5A
Other languages
Chinese (zh)
Other versions
CN108205651B (en)
Inventor
彭巍
范晓晖
杨新苗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Communications Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Communications Co Ltd
Priority to CN201611187196.5A
Publication of CN108205651A
Application granted
Publication of CN108205651B
Active legal status
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; scene-specific elements
    • G06V20/40: Scenes; scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; localisation; normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the present invention provides a method and device for recognizing an eating action. The method includes: detecting each acquired video frame and determining that a face is detected; selecting an eating-activity region in each frame using the detected face as a reference, and dividing the eating-activity region according to a first preset condition to obtain multiple image regions; performing moving-region detection on the image regions corresponding to the hands to obtain the moving regions within them; performing skin-color detection on the moving regions to obtain the skin-color areas within them; and determining that an eating action is detected according to the position parameter of the skin-color area, the position parameter of the face, and a second preset condition.

Description

Method and device for recognizing an eating action
Technical field
The present invention relates to the technical field of machine vision, and in particular to a method and device for recognizing an eating action.
Background art
As society ages, elderly care has gradually become a matter of public concern. With advances in technology, new care models such as smart elderly care are increasingly popular. Recognizing when an elderly person is eating can be applied in smart elderly care: by collecting and analyzing eating-behavior data, the person's daily activity and health can be monitored. A technical solution for recognizing eating actions is therefore needed.
Summary of the invention
In view of this, embodiments of the present invention aim to provide a method and device that can recognize an eating action.
To this end, the technical solutions of the embodiments of the present invention are realized as follows.
An embodiment of the present invention provides a method for recognizing an eating action, the method including:
detecting each acquired video frame and determining that a face is detected;
selecting an eating-activity region in each video frame using the detected face as a reference, and dividing the eating-activity region according to a first preset condition to obtain multiple image regions;
performing moving-region detection on the image regions corresponding to the hands, obtaining the moving regions within those image regions;
performing skin-color detection on the moving regions, obtaining the skin-color areas within them;
determining that an eating action is detected according to the position parameter of the skin-color area, the position parameter of the face, and a second preset condition.
In the above scheme, selecting the eating-activity region in each video frame using the detected face as a reference includes:
using the detected face as a reference, aligning, in the vertical direction, the top of the eating-activity region with the top of the face, and centering, in the horizontal direction, the face within the eating-activity region; the eating-activity region is a rectangle of height 3H and width 5W, where H is the height of the face in the video image and W is the width of the face in the video image.
In the above scheme, dividing the eating-activity region according to the first preset condition includes:
dividing the eating-activity region horizontally, from left to right, into three regions: the middle second region is the region where the face lies, the first region is to the left of the second region, and the third region is to its right; the first, second, and third regions are all of height 3H, and their widths are 2W, W, and 2W respectively.
In the above scheme, performing moving-region detection on the image regions corresponding to the hands includes:
performing moving-region detection on the video images of the first region and/or the third region.
In the above scheme, determining that an eating action is detected according to the position parameter of the skin-color area, the position parameter of the face, and the second preset condition includes:
obtaining the ordinate y1 of the bottom of the face and the ordinate y2 of the top of the skin-color area, and calculating their difference Δy = y1 - y2;
if, within a time T, the number of times Δy satisfies the second preset condition exceeds a first threshold m, determining that an eating action is detected;
where the second preset condition is: within a time t, the probability that Δy satisfies -H/4 < Δy < H exceeds a second threshold p, with t < T.
An embodiment of the present invention further provides a device for recognizing an eating action, the device including:
a face detection module, configured to detect each acquired video frame and determine that a face is detected;
a region division module, configured to select an eating-activity region in each video frame using the detected face as a reference, and divide the eating-activity region according to a first preset condition to obtain multiple image regions;
a motion detection module, configured to perform moving-region detection on the image regions corresponding to the hands, obtaining the moving regions within those image regions;
a skin-color detection module, configured to perform skin-color detection on the moving regions, obtaining the skin-color areas within them;
an action determination module, configured to determine that an eating action is detected according to the position parameter of the skin-color area, the position parameter of the face, and a second preset condition.
In the above scheme, the region division module includes:
a selection unit, configured to select the eating-activity region in each video frame using the detected face as a reference;
a division unit, configured to divide the eating-activity region according to the first preset condition to obtain multiple image regions.
In the above scheme, the selection unit is configured to use the detected face as a reference so that, in the vertical direction, the top of the eating-activity region is aligned with the top of the face and, in the horizontal direction, the face is centered within the eating-activity region; the eating-activity region is a rectangle of height 3H and width 5W, where H is the height of the face in the video image and W is the width of the face in the video image.
In the above scheme, the division unit is configured to divide the eating-activity region horizontally, from left to right, into three regions: the middle second region is the region where the face lies, the first region is to the left of the second region, and the third region is to its right; the first, second, and third regions are all of height 3H, and their widths are 2W, W, and 2W respectively.
In the above scheme, the action determination module includes:
a calculation unit, configured to obtain the ordinate y1 of the bottom of the face and the ordinate y2 of the top of the skin-color area, and calculate their difference Δy = y1 - y2;
a determination unit, configured to determine that an eating action is detected when, within a time T, the number of times Δy satisfies the second preset condition exceeds a first threshold m;
where the second preset condition is: within a time t, the probability that Δy satisfies -H/4 < Δy < H exceeds a second threshold p, with t < T.
According to the eating-action recognition method and device provided by the embodiments of the present invention, each acquired video frame is detected to determine that a face is detected; an eating-activity region is selected in each frame using the detected face as a reference and divided according to a first preset condition into multiple image regions; moving-region detection is performed on the image regions corresponding to the hands to obtain the moving regions within them; skin-color detection is performed on the moving regions to obtain the skin-color areas within them; and an eating action is determined to be detected according to the position parameter of the skin-color area, the position parameter of the face, and a second preset condition. Because the embodiments divide the acquired video image into regions and then perform moving-region detection before skin-color detection, the success rate of skin-color detection is high and regions that would interfere with it are filtered out; the eating action can then be recognized from the resulting skin-color area. The method can be applied effectively in the field of smart elderly care.
Brief description of the drawings
Fig. 1 is a first flow diagram of the eating-action recognition method according to an embodiment of the present invention;
Fig. 2 is a structural diagram of the eating-action recognition device according to an embodiment of the present invention;
Fig. 3 is a structural diagram of the region division module according to an embodiment of the present invention;
Fig. 4 is a structural diagram of the action determination module according to an embodiment of the present invention;
Fig. 5 is a second flow diagram of the eating-action recognition method according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of region division performed on an image in which a face is detected, according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of a detected skin-color area according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of a test sample in a practical application of an embodiment of the present invention.
Detailed description
The present invention is described below with reference to the accompanying drawings and embodiments.
Fig. 1 is a first flow diagram of the eating-action recognition method according to an embodiment of the present invention. As shown in Fig. 1, the method includes:
Step 101: detecting each acquired video frame and determining that a face is detected;
Step 102: selecting an eating-activity region in each video frame using the detected face as a reference, and dividing the eating-activity region according to a first preset condition to obtain multiple image regions;
Step 103: performing moving-region detection on the image regions corresponding to the hands, obtaining the moving regions within those image regions;
Step 104: performing skin-color detection on the moving regions, obtaining the skin-color areas within them;
Step 105: determining that an eating action is detected according to the position parameter of the skin-color area, the position parameter of the face, and a second preset condition.
The embodiment of the present invention divides the acquired video image into regions and performs operations such as moving-region detection and skin-color detection; the success rate of skin-color detection is therefore high, regions that would interfere with it are filtered out, and the eating action can finally be recognized from the resulting skin-color area. The method can be applied effectively in the field of smart elderly care.
Here, face detection may be realized by an Adaboost algorithm based on Haar features; those skilled in the art will appreciate that face detection is not limited to this one method. Moving-region detection may use, but is not limited to, the ViBe+ algorithm. Skin-color detection may use, but is not limited to, a skin-color detection algorithm based on an elliptical model in YCbCr space. All of these algorithms belong to the prior art and are not described in detail here.
In practical application, the present invention assumes that the device acquiring the video images, such as a camera, faces the person's face.
In the embodiment of the present invention, selecting the eating-activity region in each video frame using the detected face as a reference includes:
using the detected face as a reference, aligning, in the vertical direction, the top of the eating-activity region with the top of the face, and centering, in the horizontal direction, the face within the eating-activity region; the eating-activity region is a rectangle of height 3H and width 5W, where H is the height of the face in the video image and W is the width of the face in the video image.
In the embodiment of the present invention, dividing the eating-activity region according to the first preset condition includes:
dividing the eating-activity region horizontally, from left to right, into three regions: the middle second region is the region where the face lies, the first region is to the left of the second region, and the third region is to its right; the first, second, and third regions are all of height 3H, and their widths are 2W, W, and 2W respectively.
In the embodiment of the present invention, performing moving-region detection on the image regions corresponding to the hands includes:
performing moving-region detection on the video images of the first region and/or the third region.
In the embodiment of the present invention, determining that an eating action is detected according to the position parameter of the skin-color area, the position parameter of the face, and the second preset condition includes:
obtaining the ordinate y1 of the bottom of the face and the ordinate y2 of the top of the skin-color area, and calculating their difference Δy = y1 - y2;
if, within a time T (e.g. 2 minutes), the number of times Δy satisfies the second preset condition exceeds a first threshold m (e.g. 4 times), determining that an eating action is detected;
where the second preset condition is: within a time t (e.g. 15 seconds), the probability that Δy satisfies -H/4 < Δy < H exceeds a second threshold p (e.g. 20%), with t < T.
An embodiment of the present invention further provides a device for recognizing an eating action, used to implement the above embodiments and preferred implementations; what has already been explained is not repeated. As used below, the terms "module" and "unit" may denote a combination of software and/or hardware realizing a predetermined function. As shown in Fig. 2, the device includes:
a face detection module 21, configured to detect each acquired video frame and determine that a face is detected;
a region division module 22, configured to select an eating-activity region in each video frame using the detected face as a reference, and divide the eating-activity region according to a first preset condition to obtain multiple image regions;
a motion detection module 23, configured to perform moving-region detection on the image regions corresponding to the hands, obtaining the moving regions within those image regions;
a skin-color detection module 24, configured to perform skin-color detection on the moving regions, obtaining the skin-color areas within them;
an action determination module 25, configured to determine that an eating action is detected according to the position parameter of the skin-color area, the position parameter of the face, and a second preset condition.
The embodiment of the present invention divides the acquired video image into regions and performs operations such as moving-region detection and skin-color detection; the success rate of skin-color detection is therefore high, regions that would interfere with it are filtered out, and the eating action can finally be recognized from the resulting skin-color area. The method can be applied effectively in the field of smart elderly care.
Here, face detection may be realized by an Adaboost algorithm based on Haar features; those skilled in the art will appreciate that face detection is not limited to this one method. Moving-region detection may use, but is not limited to, the ViBe+ algorithm. Skin-color detection may use, but is not limited to, a skin-color detection algorithm based on an elliptical model in YCbCr space. All of these algorithms belong to the prior art and are not described in detail here.
In practical application, the present invention assumes that the device acquiring the video images, such as a camera, faces the person's face.
In the embodiment of the present invention, as shown in Fig. 3, the region division module 22 includes:
a selection unit 221, configured to select the eating-activity region in each video frame using the detected face as a reference;
a division unit 222, configured to divide the eating-activity region according to the first preset condition to obtain multiple image regions.
In the embodiment of the present invention, the selection unit 221 is configured to use the detected face as a reference so that, in the vertical direction, the top of the eating-activity region is aligned with the top of the face and, in the horizontal direction, the face is centered within the eating-activity region; the eating-activity region is a rectangle of height 3H and width 5W, where H is the height of the face in the video image and W is the width of the face in the video image.
In the embodiment of the present invention, the division unit 222 is configured to divide the eating-activity region horizontally, from left to right, into three regions: the middle second region is the region where the face lies, the first region is to the left of the second region, and the third region is to its right; the first, second, and third regions are all of height 3H, and their widths are 2W, W, and 2W respectively.
In the embodiment of the present invention, as shown in Fig. 4, the action determination module 25 includes:
a calculation unit 251, configured to obtain the ordinate y1 of the bottom of the face and the ordinate y2 of the top of the skin-color area, and calculate their difference Δy = y1 - y2;
a determination unit 252, configured to determine that an eating action is detected when, within a time T (e.g. 2 minutes), the number of times Δy satisfies the second preset condition exceeds a first threshold m (e.g. 4 times);
where the second preset condition is: within a time t (e.g. 15 seconds), the probability that Δy satisfies -H/4 < Δy < H exceeds a second threshold p (e.g. 20%), with t < T.
The present invention is described in detail below with reference to a scene embodiment.
The system networking of this embodiment includes the following elements:
the target to be detected, i.e. the person to be detected; a camera for acquiring video data of the person's eating actions; and a video processing unit (the above-described eating-action recognition device) for processing the video data and analyzing and judging the eating action.
The eating-action recognition flow of this system, as shown in Fig. 5, includes:
Step 501: performing face detection and judging whether a face is detected; if so, performing step 502, otherwise continuing face detection;
Step 502: selecting the eating-activity region from the video image in which the face was detected, and dividing it into regions;
Step 503: performing moving-region detection in the regions corresponding to the hands; if a moving region is detected, performing step 504, otherwise returning to step 501;
Step 504: performing skin-color detection on the detected moving regions; if a skin-color area is detected, performing step 505, otherwise returning to step 501;
Step 505: detecting the movement distribution of the detected skin-color area and determining whether an eating action is detected.
In step 501, face detection uses the Adaboost algorithm based on Haar features, i.e. rectangular features of the input image (also called Haar features). During calculation, each video frame collected by the camera is processed, eventually yielding the coordinates of two points from which the rectangular image of the face is obtained. This calculation method is prior art and is not described in detail here; of course, those skilled in the art may also use other face detection algorithms.
In step 502, to improve the accuracy of detecting the hands by skin color, the face is used as a frame of reference and a 3-by-5 (in face units) eating-activity region is marked in the video image: the height of this rectangular area is 3 times the height of the face in the video image, and its width is 5 times the width of the face, as shown in Fig. 6. The region is further subdivided into three rectangular areas: region one, Rect1 (left); region two, Rect2 (middle); and region three, Rect3 (right).
Actual testing shows that this way of dividing the detection area has two advantages:
1) the left and right hands can be better separated into different regions, improving hand recognition accuracy;
2) the actions of the hands while eating can be better captured: the hand holding the chopsticks typically appears in Rect1 (left-handed) or Rect3 (right-handed), while the hand holding the bowl typically appears in Rect2.
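The geometry above is fully determined by the detected face box. The following sketch (a minimal illustration, not part of the patent; the function name and tuple conventions are assumptions) computes the 3H-by-5W eating-activity region and its Rect1/Rect2/Rect3 split from a face bounding box (x, y, w, h), assuming image coordinates with the origin at the top-left and the region's top edge aligned with the top of the face:

```python
def eating_regions(face):
    """Given a face box (x, y, w, h), return the 3H-by-5W eating-activity
    rectangle and its three sub-regions Rect1/Rect2/Rect3, each as (x, y, w, h)."""
    x, y, w, h = face
    top = y                # region top aligned with the top of the face
    left = x - 2 * w       # region extends 2W to the left of the face
    region = (left, top, 5 * w, 3 * h)
    rect1 = (left, top, 2 * w, 3 * h)      # left area, width 2W (left hand)
    rect2 = (x, top, w, 3 * h)             # middle area, width W (face column)
    rect3 = (x + w, top, 2 * w, 3 * h)     # right area, width 2W (right hand)
    return region, rect1, rect2, rect3
```

For example, a 40x60 face at (100, 50) yields a 200x180 region starting at (20, 50). Negative x-coordinates (a face near the left image edge) would need clipping to the frame in a real implementation.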
The moving-region detection in step 503 may use, but is not limited to, the ViBe+ algorithm; this is likewise an existing algorithm and is not described in detail here, and those skilled in the art may use other moving-region detection algorithms instead. Here, the moving region is judged first and the skin-color area second, which effectively improves the success rate of skin-color detection by filtering out regions that would interfere with it.
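The patent names ViBe+ but allows any moving-region detector. As a stand-in, the sketch below uses simple frame differencing on grayscale values (an assumption for illustration only; ViBe+ itself maintains a per-pixel background sample model and is considerably more robust):

```python
def motion_mask(prev_gray, curr_gray, thresh=25):
    """Mark pixels whose grayscale value (0-255) changed by more than
    `thresh` between two consecutive frames, given as lists of rows."""
    return [[1 if abs(c - p) > thresh else 0
             for p, c in zip(row_prev, row_curr)]
            for row_prev, row_curr in zip(prev_gray, curr_gray)]
```

In the flow of Fig. 5, such a mask would be computed only inside Rect1 and/or Rect3; a nonempty mask there corresponds to "a moving region is detected" in step 503.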
Based on the foregoing, the skin-color detection in step 504 is performed on the moving regions corresponding to Rect1 and Rect3. Skin-color detection may use, but is not limited to, the elliptical-model algorithm in YCbCr space: if skin information is mapped into YCrCb space, the skin pixels are distributed approximately as an ellipse in the two-dimensional CrCb plane. Hence, once the CrCb ellipse has been obtained, a pixel with coordinates (Cr, Cb) need only be tested for membership in the ellipse (boundary included): if it lies inside, it can be judged to be skin; otherwise it is a non-skin pixel. The specific algorithm is not described in detail.
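The patent does not give the ellipse parameters. The sketch below uses values commonly cited for the elliptical skin model of Hsu, Abdel-Mottaleb, and Jain (an assumption for illustration; any fitted CrCb ellipse would work the same way) to classify a single (Cb, Cr) pixel:

```python
import math

# Commonly cited parameters of the elliptical skin model in the Cb-Cr plane
# (Hsu et al.); the patent itself leaves these values unspecified.
CX, CY = 109.38, 152.02   # centre of the skin cluster (Cb, Cr)
THETA = 2.53              # rotation angle of the ellipse, radians
ECX, ECY = 1.60, 2.41     # ellipse centre in the rotated frame
A, B = 25.39, 14.03       # semi-axes

def is_skin(cb, cr):
    """Return True if the (Cb, Cr) pair falls inside the skin ellipse."""
    # Rotate the shifted point into the ellipse's principal axes
    x = math.cos(THETA) * (cb - CX) + math.sin(THETA) * (cr - CY)
    y = -math.sin(THETA) * (cb - CX) + math.cos(THETA) * (cr - CY)
    return (x - ECX) ** 2 / A ** 2 + (y - ECY) ** 2 / B ** 2 <= 1.0
```

Applying this test to every pixel of a moving region yields the skin-color area used in step 505.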
Here, after the skin-color area is obtained, whether the person is eating is determined from the motion of the hand in Rect1 and/or Rect3 relative to the face, as follows:
First, calculate Δy = y1 - y2, where, as shown in Fig. 7, y1 is the ordinate of the bottom of the face and y2 is the ordinate of the top of the skin-color area.
Then, if within a time T (e.g. 2 minutes) the total number of times Δy satisfies the following condition exceeds m (e.g. 4 times), an eating action is judged to be detected. The condition is: the probability that -H/4 < Δy < H (H being the face height) exceeds p (e.g. 20%). A test sample from a practical application is shown in Fig. 8.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be realized by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operating steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The foregoing describes only preferred embodiments of the present invention and is not intended to limit the scope of protection of the present invention.

Claims (10)

1. a kind of recognition methods for action of having a meal, which is characterized in that this method includes:
Every frame video image of acquisition is detected, determines to detect face;
Zone of action of having a meal in being chosen using the face detected as reference per frame video image, and according to the first default item Part carries out region division to the zone of action of having a meal, and obtains multiple video image regions;
Moving region detection is carried out to the video image region corresponding with hand that division obtains, obtains the video image Moving region in region;
Face Detection is carried out to the moving region, obtains the area of skin color in the moving region;
The location parameter and the second preset condition of location parameter, face according to the area of skin color, determine to detect and have a meal Action.
It is 2. according to the method described in claim 1, it is characterized in that, described using the face detected as with reference to the every frame of selection Zone of action of having a meal in video image, including:
Using the face detected as reference, in the vertical direction, the top of the zone of action of having a meal is located at the face Top;In the horizontal direction, the face is located at the centre position of the zone of action of having a meal;The zone of action of having a meal is Rectangle, a height of 3H of the rectangle, width 5W, the H are the height of face in the video image, and the W is the video The width of face in image.
3. The method according to claim 2, characterized in that dividing the eating action region into regions according to the first preset condition comprises:
dividing the eating action region from left to right into three regions in the horizontal direction, where the second region in the middle is the region where the face is located, the region to the left of the second region is the first region, and the region to the right of the second region is the third region; the first, second, and third regions all have height 3H, and their widths are 2W, W, and 2W respectively.
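The 2W | W | 2W split of claim 3 can be sketched as follows (names are illustrative; the sketch assumes the region was not clipped at a frame border, so its width is exactly 5W):

```python
def split_regions(left, top, right, bottom, face_w):
    """Split the eating-action region into claim 3's three sub-regions.

    From left to right the widths are 2W, W, and 2W, all of height 3H;
    the middle (second) region contains the face. Each sub-region is
    returned as a (left, top, right, bottom) box.
    """
    w = face_w
    first = (left, top, left + 2 * w, bottom)            # 2W, left hand
    second = (left + 2 * w, top, left + 3 * w, bottom)   # W, the face
    third = (left + 3 * w, top, right, bottom)           # 2W, right hand
    return first, second, third
```

The first and third regions are the ones watched for hand motion in claim 4.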
4. The method according to claim 3, characterized in that performing moving-region detection on the video image regions corresponding to the hands obtained by the division comprises:
performing moving-region detection on the video image of the first region and/or the third region.
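The patent does not fix the moving-region detection algorithm; simple frame differencing is a common stand-in and is sketched here on plain 2-D lists of grayscale values (the threshold value is an assumption):

```python
def moving_mask(prev, curr, thresh=25):
    """Toy frame-difference motion detector for one hand region.

    `prev` and `curr` are same-sized 2-D lists of grayscale values.
    Returns a binary mask marking pixels whose absolute intensity
    change exceeds `thresh`; connected 1-pixels form the moving region.
    """
    return [[1 if abs(c - p) > thresh else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]
```

Running this on the first and/or third region of consecutive frames yields the moving region to which skin-color detection is then applied.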
5. The method according to any one of claims 2 to 4, characterized in that determining that an eating action is detected according to the position parameter of the skin-color area, the position parameter of the face, and the second preset condition comprises:
obtaining the ordinate y1 of the bottom of the face and the ordinate y2 of the top of the skin-color area, and calculating their difference Δy = y1 − y2;
if, within a time period T, the number of times Δy satisfies the second preset condition exceeds a first threshold m, determining that an eating action is detected;
wherein the second preset condition is: within a time period t, the probability that Δy satisfies −H/4 < Δy < H exceeds a second threshold p, with t < T.
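The two-level decision rule of claim 5 can be sketched as below. The parameter values (p, m, the number of sub-intervals) are illustrative assumptions; the patent only constrains them as thresholds with t < T:

```python
def eating_detected(dy_samples, H, p=0.6, m=3, window=10):
    """Sketch of claim 5's decision rule with assumed parameter values.

    `dy_samples` is a list of sub-intervals (each of duration t inside
    the overall period T); each sub-interval is a list of Δy = y1 - y2
    values. A sub-interval "hits" when the fraction of its samples
    satisfying -H/4 < Δy < H exceeds p; an eating action is declared
    when more than m sub-intervals hit.
    """
    hits = 0
    for interval in dy_samples[:window]:
        ok = sum(1 for dy in interval if -H / 4 < dy < H)
        if ok / len(interval) > p:
            hits += 1
    return hits > m
```

Intuitively, Δy stays in this band while a skin-colored hand hovers between chest height and the mouth, which repeats often during a meal but rarely otherwise.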
6. An apparatus for recognizing an eating action, characterized in that the apparatus comprises:
a face detection module, configured to detect each acquired frame of video image and determine that a face is detected;
a region division module, configured to select an eating action region in each frame of video image with the detected face as a reference, and divide the eating action region into regions according to a first preset condition to obtain multiple video image regions;
a motion detection module, configured to perform moving-region detection on the video image regions corresponding to the hands obtained by the division, to obtain a moving region in the video image regions;
a skin-color detection module, configured to perform skin-color detection on the moving region to obtain a skin-color area in the moving region;
an action detection module, configured to determine that an eating action is detected according to a position parameter of the skin-color area, a position parameter of the face, and a second preset condition.
7. The apparatus according to claim 6, characterized in that the region division module comprises:
a selection unit, configured to select the eating action region in each frame of video image with the detected face as a reference;
a division unit, configured to divide the eating action region into regions according to the first preset condition to obtain multiple video image regions.
8. The apparatus according to claim 7, characterized in that:
the selection unit is configured to take the detected face as the reference, where in the vertical direction the top of the eating action region is located at the top of the face, and in the horizontal direction the face is located at the center of the eating action region; the eating action region is a rectangle of height 3H and width 5W, where H is the height of the face in the video image and W is the width of the face in the video image.
9. The apparatus according to claim 8, characterized in that:
the division unit is configured to divide the eating action region from left to right into three regions in the horizontal direction, where the second region in the middle is the region where the face is located, the region to the left of the second region is the first region, and the region to the right of the second region is the third region; the first, second, and third regions all have height 3H, and their widths are 2W, W, and 2W respectively.
10. The apparatus according to claim 8 or 9, characterized in that the action detection module comprises:
a calculation unit, configured to obtain the ordinate y1 of the bottom of the face and the ordinate y2 of the top of the skin-color area, and calculate their difference Δy = y1 − y2;
a determination unit, configured to determine that an eating action is detected when, within a time period T, the number of times Δy satisfies the second preset condition exceeds a first threshold m;
wherein the second preset condition is: within a time period t, the probability that Δy satisfies −H/4 < Δy < H exceeds a second threshold p, with t < T.
CN201611187196.5A 2016-12-20 2016-12-20 Eating action recognition method and device Active CN108205651B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611187196.5A CN108205651B (en) 2016-12-20 2016-12-20 Eating action recognition method and device


Publications (2)

Publication Number Publication Date
CN108205651A true CN108205651A (en) 2018-06-26
CN108205651B CN108205651B (en) 2021-04-06

Family

ID=62603605

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611187196.5A Active CN108205651B (en) 2016-12-20 2016-12-20 Eating action recognition method and device

Country Status (1)

Country Link
CN (1) CN108205651B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509088A (en) * 2011-11-28 2012-06-20 Tcl集团股份有限公司 Hand motion detecting method, hand motion detecting device and human-computer interaction system
CN102521579A (en) * 2011-12-21 2012-06-27 Tcl集团股份有限公司 Method for identifying pushing action based on two-dimensional planar camera and system
CN103839046A (en) * 2013-12-26 2014-06-04 苏州清研微视电子科技有限公司 Automatic driver attention identification system and identification method thereof
CN104156717A (en) * 2014-08-31 2014-11-19 王好贤 Method for recognizing rule breaking of phoning of driver during driving based on image processing technology
US20150117725A1 (en) * 2013-05-21 2015-04-30 Tencent Technology (Shenzhen) Company Limited Method and electronic equipment for identifying facial features


Also Published As

Publication number Publication date
CN108205651B (en) 2021-04-06

Similar Documents

Publication Publication Date Title
US11341669B2 (en) People flow analysis apparatus, people flow analysis system, people flow analysis method, and non-transitory computer readable medium
US10810414B2 (en) Movement monitoring system
CN109670441B (en) Method, system, terminal and computer readable storage medium for realizing wearing recognition of safety helmet
US10482613B2 (en) Movement monitoring system
CN107679503A (en) A kind of crowd's counting algorithm based on deep learning
CN105631455A (en) Image main body extraction method and system
CN104049760B (en) The acquisition methods and system of a kind of man-machine interaction order
Gibert et al. Face detection method based on photoplethysmography
CN108197534A (en) A kind of head part's attitude detecting method, electronic equipment and storage medium
CN110674680B (en) Living body identification method, living body identification device and storage medium
US20140301602A1 (en) Queue Analysis
US11308348B2 (en) Methods and systems for processing image data
US11450148B2 (en) Movement monitoring system
Lin et al. Cross camera people counting with perspective estimation and occlusion handling
CN112001241A (en) Micro-expression identification method and system based on channel attention mechanism
CN111178276A (en) Image processing method, image processing apparatus, and computer-readable storage medium
CN115512134A (en) Express item stacking abnormity early warning method, device, equipment and storage medium
CN103049748B (en) Behavior monitoring method and device
Gal Automatic obstacle detection for USV’s navigation using vision sensors
CN113197558B (en) Heart rate and respiratory rate detection method and system and computer storage medium
US11587361B2 (en) Movement monitoring system
CN108205652A (en) A kind of recognition methods of action of having a meal and device
CN108205651A (en) A kind of recognition methods of action of having a meal and device
JP6851246B2 (en) Object detector
Bhakt et al. A novel framework for real and fake smile detection from videos

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant