CN102081918A - Video image display control method and video image display device

Video image display control method and video image display device

Info

Publication number
CN102081918A
CN102081918A (application CN201010612804A)
Authority
CN
China
Prior art keywords
palm
image
hand shape
hand
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201010612804
Other languages
Chinese (zh)
Other versions
CN102081918B (en)
Inventor
方伟
赵勇
袁誉乐
罗卫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Rui Technology Co., Ltd.
Original Assignee
Peking University Shenzhen Graduate School
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University Shenzhen Graduate School
Priority to CN201010612804
Publication of CN102081918A
Application granted
Publication of CN102081918B
Legal status: Expired - Fee Related

Landscapes

  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a video image display control method and a video image display device. The method comprises the following steps: collecting the scene in front of the display device in real time; extracting a human body region image from the collected real-time scene image; performing gesture detection on the human body region image and, according to the detection result, determining the control command in a gesture database that corresponds to the detected gesture; and finally outputting the control command, with which the video image display device controls the video image shown on the display device. The user thus interacts actively with the video image and can select the information of interest, which improves the efficiency of interaction between the user and the advertising content while bringing a new experience to the user.

Description

Video image display control method and video image display device
Technical field
The present invention relates to the fields of image processing and human-computer interaction, and in particular to a video image display control method and a video image display device.
Background technology
In recent years competition among advertising media has become fierce, and the digital billboard has emerged as a brand-new advertising medium. As a product of the digitalization of advertising media, the digital billboard is a digital media system that publishes various kinds of advertising information through terminal presentation equipment. It can deliver advertising content dynamically, satisfy personalized and differentiated demands, and play targeted advertisements for specific audiences at specific places and times, so it achieves good display effects. Its application potential in shopping malls, supermarkets, hotels, hospitals, cinemas and other crowded public places is very large, and its market prospects are broad.
Current digital billboards all play advertising pictures or video and animation clips automatically according to a predefined playing schedule. A passer-by can only see whatever content the billboard is currently showing and cannot choose the content he or she is interested in. To see advertising content that is not currently displayed, the viewer has to stop and wait for a long time. This is a passive mode of reception in which the advertising content cannot be anticipated, so people often cannot easily obtain the useful advertising information they want, and the effect of the advertisement is greatly reduced.
Summary of the invention
The main technical problem to be solved by the present invention is to provide a video image display control method and a video image display device. The present invention realizes active interaction between the user and the video image, allowing the user to conveniently select the information of interest and thereby improving the efficiency of information interaction.
To solve the above technical problem, the technical solution adopted by the present invention is as follows:
A video image display control method comprises the steps of:
A. collecting a real-time scene image in front of the display device;
B. performing human body detection on the real-time scene image and obtaining a human body region image;
C. detecting a gesture in the human body region image;
D. determining the control command corresponding to the gesture;
E. controlling, according to the control command, the display of the video image on the display device.
Wherein, step B comprises: comparing the currently acquired real-time scene image frame with a reference image obtained from a background model so as to detect the human body region image.
Further, the step of obtaining the human body region image comprises:
performing a pixel-level subtraction between the currently acquired real-time scene image frame and the reference image obtained from the background model to obtain a difference image;
binarizing the difference image to obtain a binarized difference image;
performing morphological processing on the binarized difference image;
performing connectivity processing on the binarized difference image according to a predetermined connectivity rule to obtain connected regions;
judging whether each connected region is a noise region and, if so, deleting it;
taking the region image composed of all remaining connected regions as the human body region image and outputting the human body region image.
Further, the above method also comprises the step of: judging whether each pixel in the currently acquired real-time scene image frame belongs to the detected human body region; if so, the background model remains unchanged, otherwise the background model is updated.
Wherein, the gesture comprises the hand shape of a palm, and step C comprises:
performing palm target detection on the human body region image and obtaining a palm target region image;
extracting hand-shape features from the palm target region image;
performing hand-shape recognition according to the extracted hand-shape features of the palm and a previously established hand-shape classifier, and judging whether the hand shape of the palm is a valid hand shape;
in step D, when the hand shape of the palm is judged to be a valid hand shape, determining the control command corresponding to this valid hand shape according to a previously established gesture database; or
the gesture comprises the hand shape of a palm and the motion trajectory of the palm, and step C comprises:
performing palm target detection on the human body region image and obtaining a palm target region image;
extracting hand-shape features from the palm target region image;
performing hand-shape recognition according to the extracted hand-shape features of the palm and a previously established hand-shape classifier, judging whether the hand shape of the palm is a valid hand shape, and, when it is, marking this palm as the currently activated palm;
detecting the motion trajectory of the currently activated palm and determining its motion type;
in step D, determining the corresponding control command in the previously established gesture database according to the valid hand shape and the motion type of the currently activated palm;
in step E, switching to the corresponding video image, or operating on the currently displayed video image, according to the control command.
Further, the step of performing palm target detection on the human body region image comprises:
performing skin color detection on the human body region image to obtain region images containing the face, arm or palm;
obtaining the region image of the arm and/or palm according to a previously established face detection model;
detecting the palm in the region image of the arm and/or palm.
Further, the step of detecting the palm in the region image of the arm and/or palm comprises:
judging whether the aspect ratio of the region image of the arm and/or palm is greater than 2; if so, judging that this region is an arm-and-palm region image, otherwise it is a palm region image;
when the region is judged to be an arm-and-palm region image, performing edge detection on the arm-and-palm region image to obtain edge information and a region contour;
fitting a minimum circumscribed ellipse to the region contour and obtaining the parameters of the circumscribed ellipse;
obtaining the orientation information of the region contour from the parameters of the circumscribed ellipse, and thus finally obtaining the pointing direction of the arm and palm;
rectifying the arm and palm region image whose pointing direction has been obtained, so that the arm and palm point straight up;
performing palm localization on the rectified arm and palm region image and obtaining the palm target region image.
Corresponding to the above method, the present invention also provides a video image display device, comprising:
a camera device for collecting the real-time scene image in front of the display device;
a human body detection device for performing human body detection on the real-time scene image and obtaining a human body region image;
a gesture detection device for detecting a gesture in the human body region image;
a control command determination device for determining the control command corresponding to the gesture;
an image display control device for controlling, according to the control command, the display of the video image on the display device.
Further, the human body detection device is used to compare the currently acquired real-time scene image frame with a reference image obtained from a background model so as to detect the human body region image.
The above video image display device further comprises a background update device, which is used to judge whether each pixel in the currently acquired real-time scene image frame belongs to the detected human body region; if so, the background model remains unchanged, otherwise the background model is updated.
Wherein, the gesture comprises the hand shape of a palm, and the gesture detection device comprises:
a palm detection unit for performing palm target detection on the human body region image and obtaining a palm target region image;
a hand-shape feature extraction unit for extracting hand-shape features from the palm target region image;
a hand-shape recognition unit for performing hand-shape recognition according to the extracted hand-shape features of the palm and a previously established hand-shape classifier, and judging whether the hand shape of the palm is a valid hand shape;
the control command determination device, when the hand shape of the palm is judged to be a valid hand shape, determines the control command corresponding to this valid hand shape according to a previously established gesture database; or
the gesture comprises the hand shape of a palm and the motion trajectory of the palm, and the gesture detection device comprises:
a palm detection unit for performing palm target detection on the human body region image and obtaining a palm target region image;
a hand-shape feature extraction unit for extracting hand-shape features from the palm target region image;
a hand-shape recognition unit for performing hand-shape recognition according to the extracted hand-shape features of the palm and a previously established hand-shape classifier, judging whether the hand shape of the palm is a valid hand shape, and, when it is, marking this palm as the currently activated palm;
a palm tracking unit for detecting the motion trajectory of the currently activated palm and determining its motion type;
the control command determination device determines the corresponding control command in the previously established gesture database according to the valid hand shape and the motion type of the currently activated palm;
the image display control device switches to the corresponding video image, or operates on the currently displayed video image, according to the control command.
The beneficial effects of the present invention are as follows:
With the video image display control method and the video image display device of the present invention, the scene in front of the video image display device is collected, the human body region image in it is extracted, the gesture made by the user is then extracted from the human body region image, the corresponding control command is determined according to the gesture, and the video image display device controls the display of the corresponding video image according to this control command, thereby completing active interaction between the user and the video image. With the method and device of the present invention, the user can actively choose and view the content he or she is interested in. The technical solution of the present invention therefore realizes active interaction between the user and the device and improves the efficiency of interaction between the video image and the user, which improves the publicity effect of the video image itself while bringing a completely new experience to the user.
Description of the drawings
Fig. 1 is a block diagram of an embodiment of the video image display device of the present invention;
Fig. 2 is a block diagram of another embodiment of the video image display device of the present invention;
Fig. 3a is a block diagram of an embodiment of the gesture detection device in Fig. 1;
Fig. 3b is a block diagram of another embodiment of the gesture detection device in Fig. 1;
Fig. 4 is a schematic diagram of an embodiment of the palm detection unit in Fig. 1;
Fig. 5 is a flow chart of an embodiment of the video image display control method of the present invention;
Fig. 6 is a flow chart of obtaining the human body region image in Fig. 5;
Fig. 7 is a flow chart of obtaining the difference image in Fig. 6;
Fig. 8 is a flow chart of the region connectivity analysis in Fig. 6;
Fig. 9 is a flow chart of updating the background model in Fig. 7;
Fig. 10 is a flow chart of the gesture detection in Fig. 5;
Fig. 11 is a flow chart of obtaining the palm target region in Fig. 10;
Fig. 12 is a flow chart of palm localization and acquisition in Fig. 11;
Fig. 13 is a flow chart of determining the palm motion type in Fig. 11;
Fig. 14a, Fig. 14b, Fig. 14c, Fig. 14d, Fig. 14e and Fig. 14f are schematic diagrams of an embodiment of the palm localization and acquisition of Fig. 12;
Fig. 15a, Fig. 15b, Fig. 15c, Fig. 15d, Fig. 15e, Fig. 15f, Fig. 15g, Fig. 15h and Fig. 15i are schematic diagrams of an embodiment of the motion-type classification of the activated palm in Fig. 13;
Fig. 16 is a schematic diagram of an embodiment of determining the control command in Fig. 6.
Embodiments
The present invention is described in further detail below through embodiments with reference to the accompanying drawings.
In recent years computer vision technology has matured and has been widely applied in many fields. Against this background, it has become possible to recognize the hand shapes and gestures of the human body through computer vision so as to understand and interpret a person's actions, and thus to complete the interaction between humans and machines. The present invention is a video image display control method and a video image display device based on this computer vision technology.
Referring to Fig. 1, an embodiment of the video image display device of the present invention comprises: a camera device 1, a human body detection device 2, a gesture detection device 3, a control command determination device 4 and an image display control device 5. The camera device 1 is connected to the human body detection device 2, the human body detection device 2 is connected to the gesture detection device 3, the gesture detection device 3 is connected to the control command determination device 4, and the control command determination device 4 is connected to the image display control device 5. The camera device 1 collects the real-time scene image in front of the image display control device 5 and sends it to the human body detection device 2; the human body detection device 2 performs human body detection on the received real-time scene image, obtains the human body region image and sends it to the gesture detection device 3; the gesture detection device 3 performs gesture detection on the received human body region image and sends the detected gesture to the control command determination device 4; the control command determination device 4 determines the corresponding control command according to the received gesture and sends this control command to the image display control device 5; and the image display control device 5 controls, according to this control command, the display of the video image on the display device.
Referring to Fig. 2, in another embodiment of the present invention the video image display device further comprises a background update device 6 connected to the human body detection device 2, which is used to judge whether each pixel in the currently acquired real-time scene image frame belongs to the detected human body region image; if so, the background model remains unchanged, otherwise the background model is updated.
Referring to Fig. 3a, in an embodiment of the present invention in which the gesture detected by the gesture detection device 3 comprises the hand shape of a palm, the gesture detection device 3 comprises a palm detection unit 31, a hand-shape feature extraction unit 32 and a hand-shape recognition unit 33. The palm detection unit 31 is connected to the hand-shape feature extraction unit 32; it performs palm target detection on the human body region image obtained by the human body detection device 2, obtains the palm target image and sends it to the hand-shape feature extraction unit 32. The hand-shape feature extraction unit 32 is connected to the hand-shape recognition unit 33; it extracts the hand-shape features from the received palm target image and sends them to the hand-shape recognition unit 33. The hand-shape recognition unit 33 is connected to the control command determination device 4; it performs hand-shape recognition according to the received hand shape of the palm and a previously established hand-shape classifier and judges whether the hand shape of the palm is a valid hand shape. If it is valid, the control command determination device 4 determines the control command corresponding to this valid hand shape according to the previously established gesture database. The image display control device 5 then switches to the corresponding video image, or operates on the currently displayed video image, according to this control command; the currently displayed video image may be a video image that has not been switched by the user, or a video image that has just been switched according to the user's gesture.
Referring to Fig. 3b, in another embodiment of the present invention in which the gesture detected by the gesture detection device 3 comprises the hand shape of a palm and the motion trajectory of the palm, the gesture detection device 3 comprises a palm detection unit 31, a hand-shape feature extraction unit 32, a hand-shape recognition unit 33 and a palm tracking unit 34 connected to the hand-shape recognition unit 33. When the hand-shape recognition unit 33 judges that the hand shape of the palm is a valid hand shape, it marks this palm as the activated palm and sends it to the palm tracking unit 34 and the control command determination device 4. The palm tracking unit 34 is connected to the control command determination device 4; it detects the motion trajectory of the received activated palm and determines the motion type of the currently activated palm. The control command determination device 4 determines the corresponding control command in the previously established gesture database according to the motion type of the currently activated palm and the valid hand shape. The image display control device 5 then switches to the corresponding video image, or operates on the currently displayed video image, according to this control command.
Referring to Fig. 4, in an embodiment of the present invention the palm detection unit 31 comprises a skin color detection module 311, a face detection module 312 and a palm target acquisition module 313. The skin color detection module 311 is connected to the face detection module 312; it detects skin regions in the obtained human body region image according to the skin color features of the human body and extracts the face, palm and/or arm regions. The face detection module 312 is connected to the palm target acquisition module 313; it detects the face region in the extracted regions and sends the detection result to the palm target acquisition module 313, which deletes the face region according to the detection result and obtains the palm target region image.
Referring again to Fig. 4, when the gesture detected by the gesture detection device 3 comprises the hand shape of a palm and the motion trajectory of the palm, the palm target acquisition module 313 comprises a palm recognition sub-module 3131 and a palm acquisition sub-module 3132 connected to it. The palm recognition sub-module 3131 judges, in the palm and/or arm region from which the face region has been deleted, whether the region contains only the palm; if so, it identifies the palm target region image, otherwise it identifies the region image as a palm-and-arm region image and sends it to the palm acquisition sub-module 3132, which obtains the palm target region image from the palm-and-arm region image.
In another embodiment of the present invention, the palm detection unit 31 further comprises a palm target correction module 314 connected to the palm target acquisition module 313, which is used to perform region connectivity analysis on the palm target region image obtained by the palm target acquisition module 313 so as to obtain a complete palm target region image.
Based on the above video image display device, the present invention proposes a video image display control method. The method is described in detail below with reference to the drawings and specific embodiments.
Referring to Fig. 5, a video image display control method comprises the steps of:
S1. collecting a real-time scene image in front of the display device.
S2. performing human body detection on the real-time scene image and obtaining a human body region image.
S3. detecting a gesture in the human body region image.
S4. determining the control command corresponding to the gesture.
S5. controlling, according to the control command, the display of the video image on the display device.
In an embodiment of the present invention, each collected image frame is also buffered, so the present embodiment further comprises, after the real-time scene image is collected, step S6: buffering the collected real-time scene image in a frame data buffer.
In order to control the image data better and thereby keep data acquisition and processing smooth, the frame data buffer in the present embodiment adopts a double-buffer queue for the video stream, so that frame image data is written into one buffer while data is taken out of a separate buffer.
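For illustration only, a minimal sketch of such a double-buffered frame queue is given below (Python is used here purely as an example; the class and method names such as DoubleBufferQueue are not from the patent, and the capture and processing threads are assumed to exist elsewhere):

```python
import threading

class DoubleBufferQueue:
    """Two frame buffers: the capture thread fills one while the
    processing thread drains the other; swap() exchanges their roles."""
    def __init__(self):
        self._write_buf = []              # buffer currently receiving captured frames
        self._lock = threading.Lock()

    def put(self, frame):
        """Called by the capture thread for every new frame."""
        with self._lock:
            self._write_buf.append(frame)

    def swap(self):
        """Called by the processing thread: hand back the frames captured so far
        and give the capture thread a fresh, empty buffer."""
        with self._lock:
            ready = self._write_buf
            self._write_buf = []
        return ready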
In an embodiment of the present invention, in order to obtain a more accurate image, the collected real-time scene image needs to be preprocessed, which comprises the following steps:
S7. converting the color space of the collected real-time scene image from RGB to HSV.
This facilitates the human body detection in step S2. Because the skin color is quite concentrated in the distribution of the color space but is strongly affected by illumination and ethnicity, the real-time scene image is converted in the present embodiment into a color space in which luminance and chrominance are separated, and the luminance component is then discarded, so as to reduce the influence of illumination intensity on the skin color.
The HSV space represents color with the three elements of hue (H), saturation (S) and value (V) and is a non-linear color representation system. The HSV representation is consistent with human perception of color, and in the HSV space human perception of color is more uniform, so the HSV space suits the characteristics of human vision. After RGB is converted to HSV, the information structure is more compact, the independence of the components is stronger, and little color information is lost. The HSV color space is therefore adopted in the present embodiment.
Of course, the color space model in the present embodiment can also be another color space, for example YCbCr.
The conversion from the RGB space to the HSV space, with R, G and B in [0, 1], is as follows:
V = max(R, G, B)
S = (V - min(R, G, B)) / V if V ≠ 0, and S = 0 if V = 0
H = 60 × (G - B) / (V - min(R, G, B)) if V = R; H = 120 + 60 × (B - R) / (V - min(R, G, B)) if V = G; H = 240 + 60 × (R - G) / (V - min(R, G, B)) if V = B; and H is increased by 360 if it is negative.
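As an informal illustration of these formulas, the sketch below converts a single normalized RGB pixel in plain Python; the function name rgb_to_hsv and the handling of grey pixels are illustrative assumptions, and in practice a library routine such as OpenCV's cvtColor would normally be used instead.

```python
def rgb_to_hsv(r, g, b):
    """Convert one pixel with r, g, b in [0, 1] to (h, s, v), following the
    formulas above: h in degrees [0, 360), s and v in [0, 1]."""
    v = max(r, g, b)
    c = v - min(r, g, b)          # difference between max and min component
    s = 0.0 if v == 0 else c / v
    if c == 0:                     # grey pixel: hue is undefined, use 0 here
        h = 0.0
    elif v == r:
        h = 60.0 * (g - b) / c
    elif v == g:
        h = 120.0 + 60.0 * (b - r) / c
    else:                          # v == b
        h = 240.0 + 60.0 * (r - g) / c
    return h % 360.0, s, v
```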
S8. denoising the image obtained after the color space conversion; in the present embodiment median filtering is used to denoise the image.
Because the real-time scene image collected in step S1 contains noise, the image needs to be denoised in order to obtain a better image.
Referring to Fig. 6, in an embodiment of the present invention the human body detection in step S2 and the obtaining of the human body region image comprise the steps of:
S21. performing a pixel-level subtraction between the currently acquired real-time scene image frame and the reference image obtained from the background model to obtain a difference image.
S22. binarizing the difference image to obtain a binarized difference image.
S23. performing morphological processing on the binarized difference image.
In some cases, for example when the shooting direction of the camera is roughly the same as the direction of the human motion, the preliminarily obtained binary difference image contains some black holes and noise points, so the preliminarily obtained binary difference image needs to be processed morphologically.
In an embodiment of the present invention, the morphological processing of step S23 comprises: using an erosion operation to remove isolated noise points in the binarized difference image, and using a dilation operation to fill the hollow parts in the binarized difference image. The structuring element of the erosion and dilation operations is a cross-shaped element whose length and width are both 3.
S24. performing connectivity processing on the binarized difference image according to a predetermined connectivity rule to obtain connected regions.
Because the binarized image contains some scattered regions or pixels, the regions that meet the predetermined rule need to be connected through region connectivity analysis. In the present embodiment, the predetermined connectivity rule of step S24 is the 8-connectivity rule; of course, it can also be another connectivity rule, for example the 4-connectivity rule.
S25. judging whether the total number of pixels inside each connected region is smaller than a set threshold; if so, regarding this connected region as a noise region and deleting it. The region image composed of all remaining connected regions is the human body region image, which is then output. The threshold can be set according to experience.
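A rough sketch of steps S21 to S25 with OpenCV (version 4) and NumPy is given below for illustration; it assumes a single grayscale reference background image and a fixed difference threshold rather than the per-pixel Gaussian model described later, and names such as extract_body_region and min_area are illustrative only.

```python
import cv2
import numpy as np

def extract_body_region(frame_gray, background_gray, thresh=30, min_area=500):
    """S21-S25: difference, binarize, morphology, connected regions, noise removal."""
    diff = cv2.absdiff(frame_gray, background_gray)                   # S21 pixel-level subtraction
    _, binary = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)   # S22 binarization
    cross = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))        # 3x3 cross structuring element
    binary = cv2.erode(binary, cross)                                  # S23 remove isolated noise points
    binary = cv2.dilate(binary, cross)                                 #     fill small holes
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)  # S24
    mask = np.zeros_like(binary)
    for i in range(1, n):                                              # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:                     # S25 drop small noise regions
            mask[labels == i] = 255
    return mask                                                        # human body region mask
```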
When gestures are detected directly, there is often noise very close to the palm in the extracted palm region image, which affects the judgment of the gesture. In order to obtain a more accurate gesture, the present invention first performs human body detection and then detects the gesture, so that noise is removed during the human body detection and the detected gesture is more accurate.
Because the human body may be moving constantly, the background of each collected scene image also changes. In order to obtain a more accurate background image, the background model needs to be updated.
Therefore, in another embodiment of the present invention, step S2 also comprises the step of:
S26. judging whether each pixel in the currently acquired real-time scene image belongs to the detected human body region; if so, the background model remains unchanged, otherwise the background model is updated.
Referring to Fig. 7, in an embodiment of the present invention step S21 comprises the steps of:
S211. obtaining the preprocessed image.
S212. judging whether the current background model has been established; if so, executing step S213, otherwise executing step S214.
S213. subtracting the pixel value of each pixel of the background reference image b_k(x, y) obtained from the background model from the pixel value of the corresponding pixel of the currently obtained preprocessed image frame f_k(x, y) to obtain the difference image D_k(x, y), that is, D_k(x, y) = |f_k(x, y) - b_k(x, y)|.
S214. establishing for each pixel a model B = [μ, δ²] represented by a single Gaussian distribution, where μ is the mean and δ² is the variance.
S215. outputting the difference image.
In an embodiment of the present invention, the binarization of the difference image in step S22 is performed as follows:
S221. setting an image segmentation threshold T = kδ in advance; the pixel value of every pixel of the difference image is compared with this predetermined threshold. The threshold can be set according to experience or calculated by an existing adaptive algorithm. In the present embodiment, the threshold T is set to 3 times the standard deviation of the pixel value of the current pixel.
S222. comparing the pixel value of each pixel in the difference image with the segmentation threshold T and segmenting the difference image according to the comparison result, thereby obtaining the binarized difference image:
M_k(x, y) = 1 (foreground) if D_k(x, y) > T, and M_k(x, y) = 0 (background) otherwise.
In the present embodiment, if the pixel value of the current pixel is greater than the threshold T, its value is set to 1; if it is smaller than or equal to the threshold T, its value is set to 0. The difference image is thus binarized, i.e. the binarized difference image is obtained.
Of course, in the present embodiment it is also possible to set the pixels whose value is greater than the threshold to 0 and the pixels whose value is smaller than or equal to the threshold to 1.
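Under the single-Gaussian background model, this binarization reduces to a few lines of NumPy; the sketch below is illustrative and assumes that the per-pixel mean mu and standard deviation sigma are arrays of the same shape as the frame.

```python
import numpy as np

def binarize_against_model(frame, mu, sigma, k=3.0):
    """S221/S222: M(x, y) = 1 where |frame - mu| > T with T = k * sigma
    (k = 3 in this embodiment)."""
    diff = np.abs(frame.astype(np.float32) - mu)
    return (diff > k * sigma).astype(np.uint8)   # 1 = foreground, 0 = background
```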
Referring to Fig. 8, in an embodiment of the present invention the connected-region analysis of the binarized difference image in step S24 comprises the steps of:
S241. scanning the current binarized difference image from top to bottom and from left to right.
S242. judging whether the current pixel is a foreground point; if so, labeling it with a new ID, otherwise continuing with step S241.
A foreground point here is a pixel whose value has changed because of the appearance of human motion in the current real scene.
S243. judging whether the pixels in the 8-connected neighborhood of this foreground point are foreground points; if so, labeling them with the same ID and pushing them onto a stack.
S244. after the above 8 pixels have been judged, checking whether the stack is empty; if it is not empty, popping the top element of the stack; if it is empty, finishing the scan and executing step S246.
S245. continuing the 8-connectivity judgment around the popped pixel and repeating the above process until the stack is empty, at which point a foreground region with the same ID has been obtained.
S246. after the whole image has been scanned, all connected regions have been obtained, and each connected region has a unique ID.
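The stack-based 8-connectivity labeling of steps S241 to S246 can be sketched in plain Python roughly as follows; function and variable names are illustrative, and a library routine would normally be preferred for speed.

```python
def label_regions(binary):
    """binary: 2-D array of 0/1. Returns a label map where each 8-connected
    foreground region carries a unique positive ID."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    next_id = 0
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                 (0, 1), (1, -1), (1, 0), (1, 1)]        # 8-connected neighborhood
    for y in range(h):                                    # S241: top-to-bottom, left-to-right scan
        for x in range(w):
            if binary[y][x] == 1 and labels[y][x] == 0:   # S242: unlabeled foreground point
                next_id += 1
                labels[y][x] = next_id
                stack = [(y, x)]
                while stack:                              # S243-S245: grow the region via the stack
                    cy, cx = stack.pop()
                    for dy, dx in neighbors:
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                           binary[ny][nx] == 1 and labels[ny][nx] == 0:
                            labels[ny][nx] = next_id
                            stack.append((ny, nx))
    return labels                                         # S246: every region has a unique ID
```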
Referring to Fig. 9, the updating of the background model in step S26 in the present embodiment comprises the steps of:
S261. obtaining the foreground mask, i.e. the pixels whose value is 1.
S262. judging whether the pixel belongs to the detected human body region; if so, executing step S263, otherwise executing step S264.
S263. keeping the parameters of the statistical model of the background pixel unchanged. Let the current frame image be I_i, α be the learning rate, μ be the mean and δ be the standard deviation; the background update formulas are then:
μ_{i+1} = μ_i
δ²_{i+1} = δ²_i
S264. updating the parameters of the statistical model of the background pixel; the background update formulas are then:
μ_{i+1} = (1 - α)μ_i + αI_i
δ²_{i+1} = (1 - α)δ²_i + α(I_i - μ_i)²
where the learning rate α can be set to 0.002 in the present embodiment; of course, it can also be set to other values.
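A sketch of the selective update of steps S263 and S264 with NumPy follows; it assumes mu, the variance sigma2 and the current frame are float arrays of the same shape and that body_mask is a boolean mask of the detected human region (all names are illustrative).

```python
import numpy as np

def update_background(mu, sigma2, frame, body_mask, alpha=0.002):
    """S263/S264: freeze the single-Gaussian model inside the detected human
    region, apply the running update everywhere else."""
    frame = frame.astype(np.float32)
    bg = ~body_mask                                    # pixels outside the human region (S264)
    old_mu = mu[bg].copy()
    mu[bg] = (1.0 - alpha) * old_mu + alpha * frame[bg]
    sigma2[bg] = (1.0 - alpha) * sigma2[bg] + alpha * (frame[bg] - old_mu) ** 2
    return mu, sigma2                                  # pixels inside the body keep their parameters (S263)
```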
Referring to Fig. 10, in an embodiment of the present invention, when the gesture in step S3 comprises the hand shape of a palm, step S3 comprises:
S31. performing palm target detection on the obtained human body region and obtaining the palm target region image.
S32. extracting hand-shape features from the palm target region image.
S33. performing hand-shape recognition according to the extracted hand-shape features and a previously established hand-shape classifier, judging whether the hand shape of the palm is a valid hand shape, and then executing step S4.
Referring to Fig. 11, in an embodiment of the present invention, performing palm target detection and obtaining the palm target region image in step S31 comprise the steps of:
S311. performing skin color detection on the obtained human body region image and obtaining region images containing the face, palm or arm.
Because the hue of human skin is distributed within a certain range, the face and the arm and palm parts can be extracted from the human body region by their skin color features.
Because the skin color is quite concentrated in the distribution of the color space but is strongly affected by illumination and ethnicity, the present embodiment has already converted the color space of the scene image to HSV (step S7) so as to separate luminance from chrominance and thereby reduce the influence of illumination intensity on the skin color. At the same time, to avoid the influence of luminance changes within the same shot and of other luminance changes, the luminance component is discarded when skin color detection is performed in step S311, and only the H component of the image is used as the detection basis.
The skin pixels are then segmented according to the skin color cluster on the H component, i.e. a threshold in the HSV space is determined according to statistical analysis and the skin color region is segmented according to this threshold, thereby separating the face, palm and/or arm regions.
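As an informal illustration, the hue-based segmentation might look like the sketch below with OpenCV; the hue range used here is only a placeholder, since the patent derives its threshold from a statistical analysis of skin color rather than from fixed values.

```python
import cv2

def skin_mask_from_hue(bgr_image, h_low=0, h_high=25):
    """Segment candidate skin pixels on the H channel of the HSV image.
    The [h_low, h_high] range is a hypothetical example, not the patent's value."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    h = hsv[:, :, 0]                        # OpenCV stores hue in [0, 179]
    mask = cv2.inRange(h, h_low, h_high)    # 255 where the hue falls in the skin range
    return mask
```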
S312. choosing one region from the above region images.
S313. performing face detection on this region according to a previously established face model; if a face is detected, discarding this region and executing step S314, otherwise outputting this palm and/or arm region image and executing step S315.
S314. judging whether there are still regions to be detected; if so, executing step S313, otherwise ending the operation.
S315. if the aspect ratio of this region image is judged to be no greater than 2, judging that this region image is the palm target region image and executing step S317; otherwise judging that this region is a palm-and-arm region image and executing step S316.
S316. using a palm localization algorithm to locate the palm in this palm-and-arm region and obtaining the palm region.
In order to obtain a complete palm region image, an embodiment of the present invention further comprises, in step S315:
when the region is judged to be a palm region image, executing step S318, i.e. performing region connectivity analysis on the palm region image so as to obtain a complete palm region image, and then executing step S317;
when the region is judged to be a palm-and-arm region image, executing step S318 before step S316, i.e. performing region connectivity analysis on the palm-and-arm region image so as to obtain a complete palm-and-arm region image.
In the present embodiment, this connectivity analysis adopts the 8-connectivity rule: it is judged whether the values of the H component of the pixel at the seed point coordinate in the original frame image and of its 8 neighboring pixels differ by less than a set threshold; if so, they are regarded as belonging to the same class of pixels and are added to the connected region, and the complete palm and/or arm region image is obtained.
In this example, face detection is used to delete the face region. Face detection includes two kinds of methods:
The first is knowledge-based face detection: the positions of different facial features are detected and the face is then located according to certain knowledge rules. Because the distribution of the local features of the face always follows certain rules — for example the eyes are always symmetrically distributed in the upper half of the face — a group of rules describing the distribution of the local facial features can be used for face detection, with bottom-up and top-down detection strategies.
The second is appearance-based face detection: because faces share a unified structural pattern, a classifier can be implemented with different strategies, for example neural network methods or traditional statistical methods. First, through learning on a large training sample set, a classifier that can correctly distinguish face samples from non-face samples is established; the image to be detected is then scanned exhaustively, the classifier detects whether the scanned image window contains a face, and if it does, the position of the face is given.
In an embodiment of the present invention, the appearance-based method is adopted for face detection, comprising: S313a. collecting a large number of face image samples offline; S313b. extracting multi-dimensional feature vectors of the faces and reducing their dimensionality with PCA (Principal Component Analysis); S313c. training a neural network with the extracted feature vectors to obtain a face classifier; S313d. performing face detection on the human body region image with the face classifier according to the above feature vectors; S313e. if a face is detected, deleting the face region, thereby obtaining the palm and/or arm region image.
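The appearance-based classifier of steps S313a to S313c could be prototyped roughly as below with scikit-learn, assuming face and non-face patches have already been collected and flattened into vectors; PCA plus a small multi-layer perceptron stand in for the neural network of the embodiment, and every name here is an illustrative assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def train_face_classifier(patches, labels, n_components=50):
    """patches: (n_samples, n_pixels) flattened image windows; labels: 1 = face, 0 = non-face."""
    pca = PCA(n_components=n_components).fit(patches)              # S313b: reduce dimensionality
    features = pca.transform(patches)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)    # S313c: neural-network classifier
    clf.fit(features, labels)
    return pca, clf

def is_face(pca, clf, window):
    """S313d: classify one scanned image window (flattened to a vector)."""
    return clf.predict(pca.transform(window.reshape(1, -1)))[0] == 1
```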
Referring to Fig. 12, in an embodiment of the present invention the palm localization and acquisition in step S316 comprise the steps of:
S316a. performing edge detection on the palm-and-arm region image with the Canny operator, obtaining the edge information and the region contour, as shown in Fig. 14a.
S316b. fitting a minimum circumscribed ellipse to the region contour and obtaining the parameters of the circumscribed ellipse, including the major axis, the minor axis and the angle with the horizontal axis, as shown in Fig. 14b.
S316c. obtaining the orientation information of the region contour from the major axis of the circumscribed ellipse and its angle with the horizontal axis, and thus finally obtaining the pointing direction of the arm and palm, as shown in Fig. 14c.
S316d. rectifying the region whose pointing direction has been obtained through a geometric coordinate transformation of the image, so that the arm and palm point straight up, as shown in Fig. 14d.
S316e. performing palm localization on the rectified arm and palm region and obtaining the palm target region image.
As shown in Fig. 14e and Fig. 14f, a palm localization algorithm is used in the present embodiment to locate the palm, specifically: the edge pixels of the palm-and-arm region are projected onto the vertical direction to find the end where the palm is located; all pixels of the palm-and-arm region are then projected onto the vertical direction, and the peak point is sought along the projection axis starting from the palm end; the valley point appearing after this peak point is taken as the splitting point between the arm and the palm; the palm-and-arm region is split in the vertical direction at this splitting point, the arm is removed and the palm part is kept, i.e. the palm target region image is obtained.
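A rough OpenCV (version 4) sketch of steps S316a to S316e follows; the contour selection, the rotation-angle convention and the projection-based wrist split are simplified, the contour is assumed to contain at least five points as required by ellipse fitting, and all function and variable names are illustrative.

```python
import cv2
import numpy as np

def locate_palm(region_gray):
    """Rough sketch of S316a-e: edge contour -> ellipse orientation ->
    rotate upright -> projection-based split of palm from arm."""
    edges = cv2.Canny(region_gray, 50, 150)                             # S316a edge detection
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)                        # dominant contour of the region
    (cx, cy), _axes, angle = cv2.fitEllipse(contour)                    # S316b minimum circumscribed ellipse
    h, w = region_gray.shape
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)                 # S316c/d rotate so the limb is roughly
    upright = cv2.warpAffine(region_gray, rot, (w, h))                  # vertical (sign convention simplified)
    _, binary = cv2.threshold(upright, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    profile = (binary > 0).sum(axis=1)                                  # S316e width of the limb in every row
    rows = np.nonzero(profile)[0]
    peak = rows[np.argmax(profile[rows])]                               # widest row ~ centre of the palm
    split = peak
    while split + 1 <= rows[-1] and profile[split + 1] <= profile[split]:
        split += 1                                                      # first valley below the peak ~ wrist
    return binary[rows[0]:split + 1, :]                                 # keep only the palm part
```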
S317. outputting the palm target region image and executing step S32, in which hand-shape features are extracted from the palm target region image.
Referring to Fig. 10, in an embodiment of the present invention, if the gesture in step S3 comprises the hand shape of a palm, then after the hand-shape features have been extracted, step S33 comprises: performing hand-shape recognition according to the extracted hand-shape features and the previously established hand-shape classifier, and judging whether the hand shape of the palm is valid; if it is valid, executing step S4, i.e. determining the control command corresponding to this valid hand shape according to the previously established gesture database; otherwise discarding this hand shape.
Referring to Fig. 13, in another embodiment of the present invention, if the gesture in step S3 comprises the hand shape of a palm and the motion trajectory of the palm, then after the hand-shape features have been extracted, step S33 further comprises: marking this palm as the activated palm and tracking the motion trajectory of the currently activated palm so as to determine its motion type.
When the hand shape of the current palm is judged to be valid, step S4 is executed: determining the corresponding control command in the previously established gesture database according to the motion type of the currently activated palm.
Finally, step S5 is executed: switching to the corresponding video image, or operating on the current video image, according to the determined control command.
In an embodiment of the present invention, gestures are divided into static gestures and moving gestures. For a static gesture, the corresponding control command is obtained according to the valid hand shape; for a moving gesture, its motion type needs to be determined first, and the corresponding control command is then obtained according to the motion type of the palm and/or the valid hand shape. The motion includes moving up, down, left, right, and so on.
Valid hand shapes: N1, left five-finger palm and right five-finger palm, as shown in Fig. 15c; N2, left five-finger palm and right fist, as shown in Fig. 15d; N3, left five-finger palm and right one-finger palm, as shown in Fig. 15e; N4, left one-finger palm and right five-finger palm, as shown in Fig. 15f.
Motion types: moving left comprises M1, a single five-finger palm moving to the left, as shown in Fig. 15b; moving right comprises M2, a single five-finger palm moving to the right, as shown in Fig. 15a; moving left and right comprises M3, the left five-finger palm moving left while the right five-finger palm moves right, as shown in Fig. 15g, and M4, the left five-finger palm moving right while the right five-finger palm moves left, as shown in Fig. 15h; combined stillness and motion comprises NAM, the left five-finger palm keeping still while the right one-finger palm moves, as shown in Fig. 15i.
Of course, in the present embodiment the motion types can also be others.
As shown in Fig. 13, in an embodiment of the present invention the establishment and training of the hand-shape classifier comprise: collecting a large set of hand-shape image samples offline; extracting the hand-shape features from them; and training a neural network with the obtained hand-shape features to obtain the hand-shape classifier.
In the present embodiment, each of the above sample sets is an image template representing a different hand shape. The hand-shape features comprise: hand-shape contour, hand-shape curvature, hand-shape perimeter, hand-shape area, hand-shape convexity, vertical projection of the hand-shape edge and horizontal projection of the hand-shape edge. Of course, the hand-shape features in the present embodiment can also be other features. The neural network in the present embodiment adopts a three-layer neural network model; of course, other neural network models can also be used.
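Contour-level features of this kind can be computed with OpenCV as in the sketch below; this is only an informal illustration assuming the palm mask has been resized to a fixed size beforehand (so the projections have a fixed length), and the exact feature set and normalization of the embodiment are not reproduced.

```python
import cv2
import numpy as np

def hand_shape_features(palm_mask):
    """palm_mask: binary (0/255, uint8) palm image of fixed size. Returns a
    feature vector: area, perimeter, convexity and edge projections."""
    contours, _ = cv2.findContours(palm_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(contour)                        # hand-shape area
    perimeter = cv2.arcLength(contour, True)               # hand-shape perimeter
    hull = cv2.convexHull(contour)
    convexity = area / max(cv2.contourArea(hull), 1.0)     # convexity measure
    edges = cv2.Canny(palm_mask, 50, 150)
    v_proj = edges.sum(axis=0).astype(np.float32)          # vertical projection of the edge
    h_proj = edges.sum(axis=1).astype(np.float32)          # horizontal projection of the edge
    return np.concatenate(([area, perimeter, convexity], v_proj, h_proj))
```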
Referring to Fig. 16, in an embodiment of the present invention the determination of the control command in step S4 comprises:
S41. obtaining the motion type of the activated palm identified in step S3.
S42. looking up the corresponding gesture in the previously established hand-shape and gesture database according to the motion type of the activated palm; if the corresponding gesture is found in the database, obtaining the command corresponding to this gesture, otherwise taking no action. The command comprises the operation to be completed by the gesture and the object of the operation.
S43. judging whether the object of the operation is a video/animation file or a picture file; if it is a video/animation file, executing step S44; if it is an image file, executing step S45.
S44. understanding and interpreting the gesture and outputting the corresponding control command, for example:
if the gesture of the currently activated palm is M1, its control command in the gesture database is to switch to and play the previous video/animation file, and the corresponding control command is output;
if the gesture of the currently activated palm is M2, the gesture is interpreted as playing the next video/animation file, and the corresponding control command is output;
if the gesture of the currently activated palm is N1, the gesture is interpreted as playing the current video/animation file, and the corresponding control command is output;
if the gesture of the currently activated palm is N2, the gesture is interpreted as pausing the current video/animation file, and the corresponding control command is output;
if the gesture of the currently activated palm is N3, the gesture is interpreted as fast-forwarding the current video/animation file, and the corresponding control command is output;
if the gesture of the currently activated palm is N4, the gesture is interpreted as rewinding the current video image, and the corresponding control command is output.
S45. understanding and interpreting the gesture and outputting the corresponding control command, for example:
if the gesture of the currently activated palm is M1, the gesture is interpreted as showing the previous picture, and the corresponding control signal is output;
if the gesture of the currently activated palm is M2, the gesture is interpreted as showing the next picture, and the corresponding control command is output;
if the gesture of the currently activated palm is M3, the gesture is interpreted as zooming in on the picture, and the corresponding control command is output;
if the gesture of the currently activated palm is M4, the gesture is interpreted as zooming out of the picture, and the corresponding control command is output;
if the gesture of the currently activated palm is NAM, the gesture is interpreted as moving the picture, and the corresponding control command is output.
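The lookup of steps S42 to S45 amounts to a table keyed on the gesture and on the type of the operand; a hypothetical sketch follows, in which the command names are invented purely for illustration.

```python
# Hypothetical mapping from (gesture, operand type) to a control command.
COMMANDS = {
    ("M1", "video"): "PLAY_PREVIOUS_CLIP",
    ("M2", "video"): "PLAY_NEXT_CLIP",
    ("N1", "video"): "PLAY",
    ("N2", "video"): "PAUSE",
    ("N3", "video"): "FAST_FORWARD",
    ("N4", "video"): "REWIND",
    ("M1", "picture"): "SHOW_PREVIOUS_PICTURE",
    ("M2", "picture"): "SHOW_NEXT_PICTURE",
    ("M3", "picture"): "ZOOM_IN",
    ("M4", "picture"): "ZOOM_OUT",
    ("NAM", "picture"): "MOVE_PICTURE",
}

def control_command(gesture, operand_type):
    """S42/S43: return the command for a recognized gesture, or None (no action)."""
    return COMMANDS.get((gesture, operand_type))
```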
With the video image display control method of the present invention, the user only needs to make the corresponding gesture, whether static or moving, to display the video image he or she wants to select, or to operate on the currently displayed video image. Active interaction between the user and the video image display device is thus realized, and the efficiency of interaction between the video image and the user is improved.
The above video image display control method can be used to display video advertising pictures or animations, and can also be used to display other pictures or animations.
The above content is a further detailed description of the present invention in conjunction with specific embodiments, and the specific implementation of the present invention should not be regarded as limited to these descriptions. For those of ordinary skill in the technical field of the present invention, several simple deductions or substitutions can also be made without departing from the concept of the present invention, all of which should be regarded as falling within the protection scope of the present invention.

Claims (11)

1. A video image display control method, characterized in that it comprises the steps of:
A. collecting a real-time scene image in front of the display device;
B. performing human body detection on the real-time scene image and obtaining a human body region image;
C. detecting a gesture in the human body region image;
D. determining the control command corresponding to the gesture;
E. controlling, according to the control command, the display of the video image on the display device.
2. The method of claim 1, characterized in that step B comprises: comparing the currently acquired real-time scene image frame with a reference image obtained from a background model so as to detect the human body region image.
3. The method of claim 2, characterized in that the step of obtaining the human body region image comprises:
performing a pixel-level subtraction between the currently acquired real-time scene image frame and the reference image obtained from the background model to obtain a difference image;
binarizing the difference image to obtain a binarized difference image;
performing morphological processing on the binarized difference image;
performing connectivity processing on the binarized difference image according to a predetermined connectivity rule to obtain connected regions;
judging whether each connected region is a noise region and, if so, deleting it;
taking the region image composed of all remaining connected regions as the human body region image and outputting the human body region image.
4. The method of claim 2 or 3, characterized in that it further comprises the step of: judging whether each pixel in the currently acquired real-time scene image frame belongs to the detected human body region; if so, the background model remains unchanged, otherwise the background model is updated.
5. The method of any one of claims 1 to 4, characterized in that the gesture comprises the hand shape of a palm, and step C comprises:
performing palm target detection on the human body region image and obtaining a palm target region image;
extracting hand-shape features from the palm target region image;
performing hand-shape recognition according to the extracted hand-shape features of the palm and a previously established hand-shape classifier, and judging whether the hand shape of the palm is a valid hand shape;
in step D, when the hand shape of the palm is judged to be a valid hand shape, determining the control command corresponding to this valid hand shape according to a previously established gesture database; or
the gesture comprises the hand shape of a palm and the motion trajectory of the palm, and step C comprises:
performing palm target detection on the human body region image and obtaining a palm target region image;
extracting hand-shape features from the palm target region image;
performing hand-shape recognition according to the extracted hand-shape features of the palm and a previously established hand-shape classifier, judging whether the hand shape of the palm is a valid hand shape, and, when it is, marking this palm as the currently activated palm;
detecting the motion trajectory of the currently activated palm and determining its motion type;
in step D, determining the corresponding control command in the previously established gesture database according to the valid hand shape and the motion type of the currently activated palm;
in step E, switching to the corresponding video image, or operating on the currently displayed video image, according to the control command.
6. The method of claim 5, characterized in that the step of performing palm target detection on the human body region image comprises:
performing skin color detection on the human body region image to obtain region images containing the face, arm or palm;
obtaining the region image of the arm and/or palm according to a previously established face detection model;
detecting the palm in the region image of the arm and/or palm.
7. The method according to claim 6, characterized in that the step of detecting the palm in the region image of the arm and/or palm comprises:
judging whether the aspect ratio of the region image of the arm and/or palm is greater than 2; if so, judging the region to be an arm-and-palm region image, otherwise a palm region image;
when the region is judged to be an arm-and-palm region image, performing edge detection on the arm-and-palm region image to obtain edge information and thereby the region contour;
fitting a minimum enclosing ellipse to the region contour to obtain the parameters of the enclosing ellipse;
obtaining the orientation information of the region contour from the parameters of the enclosing ellipse, thereby obtaining the pointing direction of the arm and palm;
performing image rectification on the arm and palm region image whose pointing direction has been obtained, so that the arm and palm are oriented vertically upward;
performing palm detection and localization on the rectified arm and palm region image to obtain the palm target region image.
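A sketch of the ellipse-fitting and rectification steps, assuming OpenCV 4 and a binary arm-and-palm mask; the exact fitting routine and the 180-degree disambiguation of "upward" are left open by the claim:

```python
import cv2

def rectify_arm_palm(region_mask, region_image):
    """Orientation estimation and rectification (claim 7 sketch).

    Fit an ellipse to the largest contour of the binary arm-and-palm mask and
    rotate the image so the ellipse's major axis points roughly straight up.
    A further 180-degree check would be needed to tell 'up' from 'down'.
    """
    contours, _ = cv2.findContours(region_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)
    (cx, cy), _axes, angle = cv2.fitEllipse(contour)     # tilt angle in degrees
    h, w = region_image.shape[:2]
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)  # rotate to undo the tilt
    return cv2.warpAffine(region_image, rot, (w, h))
```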
8. A video image display device, characterized in that it comprises:
a camera device, configured to capture the real-time scene image in front of the display device;
a human body detection device, configured to perform human body detection on the real-time scene image to obtain a human body region image;
a gesture detection device, configured to detect a gesture in the human body region image;
a control command determination device, configured to determine the control command corresponding to the gesture;
an image display control device, configured to control the display of the video image on the display device according to the control command.
9. The video image display device according to claim 8, characterized in that the human body detection device is configured to compare the currently acquired real-time scene image frame with a reference image obtained from a background model, thereby detecting the human body region image.
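A minimal sketch of this comparison, assuming grayscale images and a simple absolute-difference threshold; the patent does not fix the background model or the threshold value:

```python
import cv2

def detect_human_region(frame_gray, reference_gray, thresh=30):
    """Compare the current frame with the background reference (claim 9 sketch).

    Returns a binary foreground mask; the threshold and the median filtering
    are illustrative choices, not taken from the patent.
    """
    diff = cv2.absdiff(frame_gray, reference_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return cv2.medianBlur(mask, 5)
```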
10. The video image display device according to claim 8 or 9, characterized in that it further comprises a background update device, configured to judge whether each pixel in the currently acquired real-time scene image frame belongs to the detected human body region; if so, the background model remains unchanged, otherwise the background model is updated.
11. The video image display device according to any one of claims 8 to 10, characterized in that the gesture comprises a hand shape of a palm, and the gesture detection device comprises:
a palm detection unit, configured to perform palm target detection on the human body region image to obtain a palm target region image;
a hand shape feature extraction unit, configured to extract hand shape features from the palm target region image;
a hand shape recognition unit, configured to perform hand shape recognition according to the extracted hand shape features of the palm and a pre-established hand shape classifier, and judge whether the hand shape of the palm is a valid hand shape;
and the control command determination device, when the hand shape of the palm is judged to be a valid hand shape, determines the control command corresponding to the valid hand shape according to a pre-established gesture database; or
the gesture comprises the hand shape of a palm and the movement trajectory of the palm, and the gesture detection device comprises:
a palm detection unit, configured to perform palm target detection on the human body region image to obtain a palm target region image;
a hand shape feature extraction unit, configured to extract hand shape features from the palm target region image;
a hand shape recognition unit, configured to perform hand shape recognition according to the extracted hand shape features of the palm and the pre-established hand shape classifier, judge whether the hand shape of the palm is a valid hand shape, and, when it is, mark the palm as the currently activated palm;
a palm tracking unit, configured to detect the movement trajectory of the currently activated palm and determine the motion type of the currently activated palm;
the control command determination device, configured to determine the corresponding control command in the pre-established gesture database according to the valid hand shape and the motion type of the currently activated palm;
and the image display control device, which switches to the corresponding video image or operates the currently displayed video image according to the control command.
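The device claims mirror the method steps as cooperating modules. A hypothetical composition is sketched below; the component classes and method names are placeholders standing in for the camera, human body detection, gesture detection, command determination and display control devices, and do not appear in the patent:

```python
class VideoImageDisplayDevice:
    """Sketch of the module structure described in claims 8 to 11."""

    def __init__(self, camera, human_detector, gesture_detector,
                 command_resolver, display_controller):
        self.camera = camera
        self.human_detector = human_detector
        self.gesture_detector = gesture_detector
        self.command_resolver = command_resolver
        self.display_controller = display_controller

    def step(self):
        frame = self.camera.capture()                     # real-time scene image
        human_region = self.human_detector.detect(frame)  # human body region image
        if human_region is None:
            return
        gesture = self.gesture_detector.detect(human_region)
        command = self.command_resolver.resolve(gesture)
        if command is not None:
            self.display_controller.apply(command)        # switch / operate video
```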
CN 201010612804 2010-09-28 2010-12-29 Video image display control method and video image display device Expired - Fee Related CN102081918B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010612804 CN102081918B (en) 2010-09-28 2010-12-29 Video image display control method and video image display device

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201010295067 2010-09-28
CN201010295067.4 2010-09-28
CN 201010612804 CN102081918B (en) 2010-09-28 2010-12-29 Video image display control method and video image display device

Publications (2)

Publication Number Publication Date
CN102081918A true CN102081918A (en) 2011-06-01
CN102081918B CN102081918B (en) 2013-02-20

Family

ID=44087844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010612804 Expired - Fee Related CN102081918B (en) 2010-09-28 2010-12-29 Video image display control method and video image display device

Country Status (1)

Country Link
CN (1) CN102081918B (en)

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102426480A (en) * 2011-11-03 2012-04-25 康佳集团股份有限公司 Man-machine interactive system and real-time gesture tracking processing method for same
CN102436301A (en) * 2011-08-20 2012-05-02 Tcl集团股份有限公司 Human-machine interaction method and system based on reference region and time domain information
CN102509088A (en) * 2011-11-28 2012-06-20 Tcl集团股份有限公司 Hand motion detecting method, hand motion detecting device and human-computer interaction system
CN102509079A (en) * 2011-11-04 2012-06-20 康佳集团股份有限公司 Real-time gesture tracking method and tracking system
CN102831407A (en) * 2012-08-22 2012-12-19 中科宇博(北京)文化有限公司 Method for realizing vision identification system of biomimetic mechanical dinosaur
CN102930270A (en) * 2012-09-19 2013-02-13 东莞中山大学研究院 Method and system for identifying hands based on complexion detection and background elimination
CN102981604A (en) * 2011-06-07 2013-03-20 索尼公司 Image processing apparatus, image processing method, and program
CN103034322A (en) * 2011-09-30 2013-04-10 德信互动科技(北京)有限公司 Man-machine interaction system and man-machine interaction method
CN103049084A (en) * 2012-12-18 2013-04-17 深圳国微技术有限公司 Electronic device and method for adjusting display direction according to face direction
CN103092332A (en) * 2011-11-08 2013-05-08 苏州中茵泰格科技有限公司 Digital image interactive method and system of television
CN103176667A (en) * 2013-02-27 2013-06-26 广东工业大学 Projection screen touch terminal device based on Android system
CN103246347A (en) * 2013-04-02 2013-08-14 百度在线网络技术(北京)有限公司 Control method, device and terminal
CN103428551A (en) * 2013-08-24 2013-12-04 渭南高新区金石为开咨询有限公司 Gesture remote control system
CN103442177A (en) * 2013-08-30 2013-12-11 程治永 PTZ video camera control system and method based on gesture identification
CN103474010A (en) * 2013-09-22 2013-12-25 广州中国科学院软件应用技术研究所 Video analysis-based intelligent playing method and device of outdoor advertisement
CN103576990A (en) * 2012-07-20 2014-02-12 中国航天科工集团第三研究院第八三五八研究所 Optical touch method based on single Gaussian model
CN103853462A (en) * 2012-12-05 2014-06-11 现代自动车株式会社 System and method for providing user interface using hand shape trace recognition in vehicle
CN103885587A (en) * 2014-02-21 2014-06-25 联想(北京)有限公司 Information processing method and electronic equipment
CN104050443A (en) * 2013-03-13 2014-09-17 英特尔公司 Gesture pre-processing of video stream using skintone detection
CN104683722A (en) * 2013-11-26 2015-06-03 精工爱普生株式会社 Image display apparatus and method of controlling image display apparatus
CN104798104A (en) * 2012-12-13 2015-07-22 英特尔公司 Gesture pre-processing of video stream using a markered region
CN104809387A (en) * 2015-03-12 2015-07-29 山东大学 Video image gesture recognition based non-contact unlocking method and device
CN105095882A (en) * 2015-08-24 2015-11-25 珠海格力电器股份有限公司 Method and apparatus for gesture identification
CN105825193A (en) * 2016-03-25 2016-08-03 乐视控股(北京)有限公司 Method and device for position location of center of palm, gesture recognition device and intelligent terminals
CN105930811A (en) * 2016-04-26 2016-09-07 济南梦田商贸有限责任公司 Palm texture feature detection method based on image processing
CN105980963A (en) * 2014-01-07 2016-09-28 汤姆逊许可公司 System and method for controlling playback of media using gestures
CN106022211A (en) * 2016-05-04 2016-10-12 北京航空航天大学 Method using gestures to control multimedia device
CN106197437A (en) * 2016-07-01 2016-12-07 蔡雄 A kind of Vehicular guidance system possessing Road Detection function
CN106227230A (en) * 2016-07-09 2016-12-14 东莞市华睿电子科技有限公司 A kind of unmanned aerial vehicle (UAV) control method
CN106886275A (en) * 2015-12-15 2017-06-23 比亚迪股份有限公司 The control method of car-mounted terminal, device and vehicle
WO2017129020A1 (en) * 2016-01-29 2017-08-03 中兴通讯股份有限公司 Human behaviour recognition method and apparatus in video, and computer storage medium
CN107390573A (en) * 2017-06-28 2017-11-24 长安大学 Intelligent wheelchair system and control method based on gesture control
WO2018113259A1 (en) * 2016-12-22 2018-06-28 深圳光启合众科技有限公司 Method and device for acquiring target object, and robot
CN108509853A (en) * 2018-03-05 2018-09-07 西南民族大学 A kind of gesture identification method based on camera visual information
CN108647564A (en) * 2018-03-28 2018-10-12 安徽工程大学 A kind of gesture recognition system and method based on casement window device
CN111580652A (en) * 2020-05-06 2020-08-25 Oppo广东移动通信有限公司 Control method and device for video playing, augmented reality equipment and storage medium
CN112016440A (en) * 2020-08-26 2020-12-01 杭州云栖智慧视通科技有限公司 Target pushing method based on multi-target tracking
CN113032605A (en) * 2019-12-25 2021-06-25 中移(成都)信息通信科技有限公司 Information display method, device and equipment and computer storage medium
CN113221892A (en) * 2021-05-12 2021-08-06 佛山育脉科技有限公司 Palm image determination method and device and computer readable storage medium
CN113807328A (en) * 2021-11-18 2021-12-17 济南和普威视光电技术有限公司 Target detection method, device and medium based on algorithm fusion
CN114153308A (en) * 2020-09-08 2022-03-08 阿里巴巴集团控股有限公司 Gesture control method and device, electronic equipment and computer readable medium
CN116030411A (en) * 2022-12-28 2023-04-28 宁波星巡智能科技有限公司 Human privacy shielding method, device and equipment based on gesture recognition

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1276572A (en) * 1999-06-08 2000-12-13 松下电器产业株式会社 Hand shape and gesture identifying device, identifying method and medium for recording program contg. said method
CN1860429A (en) * 2003-09-30 2006-11-08 皇家飞利浦电子股份有限公司 Gesture to define location, size, and/or content of content window on a display
CN101605399A (en) * 2008-06-13 2009-12-16 英华达(上海)电子有限公司 A kind of portable terminal and method that realizes Sign Language Recognition
CN101332362A (en) * 2008-08-05 2008-12-31 北京中星微电子有限公司 Interactive delight system based on human posture recognition and implement method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Hu Wenjuan, "Research and System Implementation of Gesture-Driven Chime-Bell Performance Technology", 2007-08-15 *

Cited By (70)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102981604B (en) * 2011-06-07 2016-12-14 索尼公司 Image processing equipment and image processing method
US9916012B2 (en) 2011-06-07 2018-03-13 Sony Corporation Image processing apparatus, image processing method, and program
CN102981604A (en) * 2011-06-07 2013-03-20 索尼公司 Image processing apparatus, image processing method, and program
US9785245B2 (en) 2011-06-07 2017-10-10 Sony Corporation Image processing apparatus, image processing method, and program for recognizing a gesture
CN102436301B (en) * 2011-08-20 2015-04-15 Tcl集团股份有限公司 Human-machine interaction method and system based on reference region and time domain information
CN102436301A (en) * 2011-08-20 2012-05-02 Tcl集团股份有限公司 Human-machine interaction method and system based on reference region and time domain information
CN103034322A (en) * 2011-09-30 2013-04-10 德信互动科技(北京)有限公司 Man-machine interaction system and man-machine interaction method
CN102426480A (en) * 2011-11-03 2012-04-25 康佳集团股份有限公司 Man-machine interactive system and real-time gesture tracking processing method for same
CN102509079A (en) * 2011-11-04 2012-06-20 康佳集团股份有限公司 Real-time gesture tracking method and tracking system
CN103092332A (en) * 2011-11-08 2013-05-08 苏州中茵泰格科技有限公司 Digital image interactive method and system of television
CN102509088A (en) * 2011-11-28 2012-06-20 Tcl集团股份有限公司 Hand motion detecting method, hand motion detecting device and human-computer interaction system
CN102509088B (en) * 2011-11-28 2014-01-08 Tcl集团股份有限公司 Hand motion detecting method, hand motion detecting device and human-computer interaction system
CN103576990A (en) * 2012-07-20 2014-02-12 中国航天科工集团第三研究院第八三五八研究所 Optical touch method based on single Gaussian model
CN102831407A (en) * 2012-08-22 2012-12-19 中科宇博(北京)文化有限公司 Method for realizing vision identification system of biomimetic mechanical dinosaur
CN102831407B (en) * 2012-08-22 2014-10-29 中科宇博(北京)文化有限公司 Method for realizing vision identification system of biomimetic mechanical dinosaur
CN102930270A (en) * 2012-09-19 2013-02-13 东莞中山大学研究院 Method and system for identifying hands based on complexion detection and background elimination
CN103853462A (en) * 2012-12-05 2014-06-11 现代自动车株式会社 System and method for providing user interface using hand shape trace recognition in vehicle
US9720507B2 (en) 2012-12-13 2017-08-01 Intel Corporation Gesture pre-processing of video stream using a markered region
US10261596B2 (en) 2012-12-13 2019-04-16 Intel Corporation Gesture pre-processing of video stream using a markered region
CN107272883A (en) * 2012-12-13 2017-10-20 英特尔公司 The gesture of video flowing is pre-processed using marked region
CN104798104A (en) * 2012-12-13 2015-07-22 英特尔公司 Gesture pre-processing of video stream using a markered region
US10146322B2 (en) 2012-12-13 2018-12-04 Intel Corporation Gesture pre-processing of video stream using a markered region
CN103049084A (en) * 2012-12-18 2013-04-17 深圳国微技术有限公司 Electronic device and method for adjusting display direction according to face direction
CN103049084B (en) * 2012-12-18 2016-01-27 深圳国微技术有限公司 A kind of electronic equipment and method thereof that can adjust display direction according to face direction
CN103176667A (en) * 2013-02-27 2013-06-26 广东工业大学 Projection screen touch terminal device based on Android system
CN104050443A (en) * 2013-03-13 2014-09-17 英特尔公司 Gesture pre-processing of video stream using skintone detection
CN104050443B (en) * 2013-03-13 2018-10-12 英特尔公司 It is pre-processed using the posture of the video flowing of Face Detection
CN103246347A (en) * 2013-04-02 2013-08-14 百度在线网络技术(北京)有限公司 Control method, device and terminal
CN103428551A (en) * 2013-08-24 2013-12-04 渭南高新区金石为开咨询有限公司 Gesture remote control system
CN103442177A (en) * 2013-08-30 2013-12-11 程治永 PTZ video camera control system and method based on gesture identification
CN103474010A (en) * 2013-09-22 2013-12-25 广州中国科学院软件应用技术研究所 Video analysis-based intelligent playing method and device of outdoor advertisement
CN104683722A (en) * 2013-11-26 2015-06-03 精工爱普生株式会社 Image display apparatus and method of controlling image display apparatus
CN104683722B (en) * 2013-11-26 2019-07-12 精工爱普生株式会社 Image display device and its control method
CN105980963A (en) * 2014-01-07 2016-09-28 汤姆逊许可公司 System and method for controlling playback of media using gestures
CN103885587A (en) * 2014-02-21 2014-06-25 联想(北京)有限公司 Information processing method and electronic equipment
CN104809387A (en) * 2015-03-12 2015-07-29 山东大学 Video image gesture recognition based non-contact unlocking method and device
CN104809387B (en) * 2015-03-12 2017-08-29 山东大学 Contactless unlocking method and device based on video image gesture identification
CN105095882A (en) * 2015-08-24 2015-11-25 珠海格力电器股份有限公司 Method and apparatus for gesture identification
CN105095882B (en) * 2015-08-24 2019-03-19 珠海格力电器股份有限公司 The recognition methods of gesture identification and device
CN106886275A (en) * 2015-12-15 2017-06-23 比亚迪股份有限公司 The control method of car-mounted terminal, device and vehicle
CN106886275B (en) * 2015-12-15 2020-03-20 比亚迪股份有限公司 Control method and device of vehicle-mounted terminal and vehicle
WO2017129020A1 (en) * 2016-01-29 2017-08-03 中兴通讯股份有限公司 Human behaviour recognition method and apparatus in video, and computer storage medium
CN105825193A (en) * 2016-03-25 2016-08-03 乐视控股(北京)有限公司 Method and device for position location of center of palm, gesture recognition device and intelligent terminals
CN105930811A (en) * 2016-04-26 2016-09-07 济南梦田商贸有限责任公司 Palm texture feature detection method based on image processing
CN106022211A (en) * 2016-05-04 2016-10-12 北京航空航天大学 Method using gestures to control multimedia device
CN106022211B (en) * 2016-05-04 2019-06-28 北京航空航天大学 A method of utilizing gesture control multimedia equipment
CN106197437A (en) * 2016-07-01 2016-12-07 蔡雄 A kind of Vehicular guidance system possessing Road Detection function
CN106227230A (en) * 2016-07-09 2016-12-14 东莞市华睿电子科技有限公司 A kind of unmanned aerial vehicle (UAV) control method
CN108230328A (en) * 2016-12-22 2018-06-29 深圳光启合众科技有限公司 Obtain the method, apparatus and robot of target object
KR102293163B1 (en) 2016-12-22 2021-08-23 선전 쾅-츠 허종 테크놀로지 엘티디. How to acquire a target, devices and robots
CN108230328B (en) * 2016-12-22 2021-10-22 新沂阿凡达智能科技有限公司 Method and device for acquiring target object and robot
WO2018113259A1 (en) * 2016-12-22 2018-06-28 深圳光启合众科技有限公司 Method and device for acquiring target object, and robot
KR20190099259A (en) * 2016-12-22 2019-08-26 선전 쾅-츠 허종 테크놀로지 엘티디. How to get the target, device and robot
US11127151B2 (en) 2016-12-22 2021-09-21 Shen Zhen Kuang-Chi Hezhong Technology Ltd Method and device for acquiring target object, and robot
CN107390573B (en) * 2017-06-28 2020-05-29 长安大学 Intelligent wheelchair system based on gesture control and control method
CN107390573A (en) * 2017-06-28 2017-11-24 长安大学 Intelligent wheelchair system and control method based on gesture control
CN108509853A (en) * 2018-03-05 2018-09-07 西南民族大学 A kind of gesture identification method based on camera visual information
CN108647564A (en) * 2018-03-28 2018-10-12 安徽工程大学 A kind of gesture recognition system and method based on casement window device
CN113032605B (en) * 2019-12-25 2023-08-18 中移(成都)信息通信科技有限公司 Information display method, device, equipment and computer storage medium
CN113032605A (en) * 2019-12-25 2021-06-25 中移(成都)信息通信科技有限公司 Information display method, device and equipment and computer storage medium
CN111580652A (en) * 2020-05-06 2020-08-25 Oppo广东移动通信有限公司 Control method and device for video playing, augmented reality equipment and storage medium
CN112016440A (en) * 2020-08-26 2020-12-01 杭州云栖智慧视通科技有限公司 Target pushing method based on multi-target tracking
CN112016440B (en) * 2020-08-26 2024-02-20 杭州云栖智慧视通科技有限公司 Target pushing method based on multi-target tracking
CN114153308A (en) * 2020-09-08 2022-03-08 阿里巴巴集团控股有限公司 Gesture control method and device, electronic equipment and computer readable medium
CN114153308B (en) * 2020-09-08 2023-11-21 阿里巴巴集团控股有限公司 Gesture control method, gesture control device, electronic equipment and computer readable medium
CN113221892A (en) * 2021-05-12 2021-08-06 佛山育脉科技有限公司 Palm image determination method and device and computer readable storage medium
CN113807328B (en) * 2021-11-18 2022-03-18 济南和普威视光电技术有限公司 Target detection method, device and medium based on algorithm fusion
CN113807328A (en) * 2021-11-18 2021-12-17 济南和普威视光电技术有限公司 Target detection method, device and medium based on algorithm fusion
CN116030411B (en) * 2022-12-28 2023-08-18 宁波星巡智能科技有限公司 Human privacy shielding method, device and equipment based on gesture recognition
CN116030411A (en) * 2022-12-28 2023-04-28 宁波星巡智能科技有限公司 Human privacy shielding method, device and equipment based on gesture recognition

Also Published As

Publication number Publication date
CN102081918B (en) 2013-02-20

Similar Documents

Publication Publication Date Title
CN102081918B (en) Video image display control method and video image display device
Biswas et al. Gesture recognition using microsoft kinect®
CN100393106C (en) Method and apparatus for detecting and/or tracking image or color area of image sequence
CN102270348B (en) Method for tracking deformable hand gesture based on video streaming
US5774591A (en) Apparatus and method for recognizing facial expressions and facial gestures in a sequence of images
KR100612858B1 (en) Method and apparatus for tracking human using robot
CN102298709A (en) Energy-saving intelligent identification digital signage fused with multiple characteristics in complicated environment
CN103353935A (en) 3D dynamic gesture identification method for intelligent home system
CN102402680A (en) Hand and indication point positioning method and gesture confirming method in man-machine interactive system
CN101470809A (en) Moving object detection method based on expansion mixed gauss model
CN108197534A (en) A kind of head part's attitude detecting method, electronic equipment and storage medium
CN110235169A (en) Evaluation system of making up and its method of operating
US20090033622A1 (en) Smartscope/smartshelf
CN110032932B (en) Human body posture identification method based on video processing and decision tree set threshold
CN105912126B (en) A kind of gesture motion is mapped to the adaptive adjusting gain method at interface
CN113378641B (en) Gesture recognition method based on deep neural network and attention mechanism
CN103793056A (en) Mid-air gesture roaming control method based on distance vector
Schiele Model-free tracking of cars and people based on color regions
CN101398896B (en) Device and method for extracting color characteristic with strong discernment for image forming apparatus
CN103150552A (en) Driving training management method based on people counting
CN103049748A (en) Behavior-monitoring method and behavior-monitoring system
CN102073878A (en) Non-wearable finger pointing gesture visual identification method
CN111967324A (en) Dressing system with intelligent identification function and identification method thereof
CN106796649A (en) Use the man-machine interface based on attitude of label
Manresa-Yee et al. Towards hands-free interfaces based on real-time robust facial gesture recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: SHENZHEN RUIGONG TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: SHENZHEN GRADUATE SCHOOL OF PEKING UNIVERSITY

Effective date: 20150624

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20150624

Address after: 518000 Guangdong city of Shenzhen province Nanshan District high in the four No. 31 EVOC technology building 17B1

Patentee after: Shenzhen Rui Technology Co., Ltd.

Address before: 518055 Guangdong city in Shenzhen Province, Nanshan District City Xili Shenzhen University North Campus

Patentee before: Shenzhen Graduate School of Peking University

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130220

Termination date: 20171229