Input information acquisition method and apparatus for an interactive system, and electronic device
Technical field
This specification relates to the field of computer technology, and in particular to an input information acquisition method and apparatus for an interactive system, and an electronic device.
Background art
In existing interaction scenarios, after an image of the interaction area is captured, the image is usually sent directly to a back-end interactive system. The interactive system performs the image processing to obtain the effective interaction information and then responds accordingly to that information, thereby realizing the interaction.
However, in such an interaction process the interactive system must not only bear the image processing workload; to ensure that it can process and respond to user operations in time and that the interaction proceeds smoothly, the interactive system must also remain in a real-time running state, which consumes substantial resources. Moreover, in some application scenarios such an interaction process affects the fluency of the interaction: for example, if a user's entrance is not recognized in time, the interaction flow cannot be started quickly, which degrades the user experience.
Summary of the invention
In view of this, the embodiments of this specification provide an input information acquisition method and apparatus for an interactive system, and an electronic device.
The embodiments of this specification adopt the following technical solutions:
An embodiment of this specification provides an input information acquisition method for an interactive system, comprising:
obtaining a target image of an interaction area through an image acquisition unit, wherein the interaction area is a region whose depth information has been calibrated in advance with a calibration object, and the target image includes a depth image containing the depth information;
detecting, by an image processing unit and based on the depth image, whether the target image contains a foreground image according to a background image preset in the image processing unit, wherein the background image is the background image of the interaction area; and
when the target image contains the foreground image, processing the target image by the image processing unit and generating input information for the interactive system.
An embodiment of this specification further provides an input information acquisition apparatus for an interactive system, configured to perform the following:
obtaining a target image of an interaction area through an image acquisition unit, wherein the interaction area is a region whose depth information has been calibrated in advance with a calibration object, and the target image includes a depth image containing the depth information;
detecting, by an image processing unit and based on the depth image, whether the target image contains a foreground image according to a background image preset in the image processing unit, wherein the background image is the background image of the interaction area; and
when the target image contains the foreground image, processing the target image by the image processing unit and generating input information for the interactive system.
An embodiment of this specification further provides an electronic device for obtaining input information for an interactive system, comprising: at least one processor and a memory, wherein the memory stores a program configured to be executed by the at least one processor to perform the following steps:
obtaining a target image of an interaction area through an image acquisition unit, wherein the interaction area is a region whose depth information has been calibrated in advance with a calibration object, and the target image includes a depth image containing the depth information;
detecting, by an image processing unit and based on the depth image, whether the target image contains a foreground image according to a background image preset in the image processing unit, wherein the background image is the background image of the interaction area; and
when the target image contains the foreground image, processing the target image by the image processing unit and generating input information for the interactive system.
The at least one technical solution adopted by the embodiments of this specification can achieve the following beneficial effects. The target image of the interaction area is obtained through the image acquisition unit, and the target image includes a depth image containing depth information. The image processing unit, which is arranged between the image acquisition unit and the interactive system, detects whether the target image contains a foreground image according to a background image preset in the image processing unit, where the background image is the background image of the interaction area and the interaction area is a region whose depth information has been calibrated in advance with a calibration object. Because the detection is based on depth information, the background image can be used to detect the target image directly, so the target image can be processed directly and the amount of computation is effectively reduced; moreover, because the background image is obtained from the calibrated interaction area, the calibrated position information is used efficiently, which improves the accuracy of the input information and ensures the fluency of the interaction. When the target image contains a foreground image, the image processing unit processes the target image and generates the input information for the interactive system. Since the image processing unit is arranged independently of the image acquisition unit and the interactive system, it can complete the acquisition of the input information on its own; the interactive system then only needs to interact after receiving the input information from the image processing unit and does not need to run in real time, which saves resources.
Brief description of the drawings
To describe the technical solutions in the embodiments of this specification or in the prior art more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments or the prior art. Apparently, the accompanying drawings in the following description show merely some embodiments recorded in this specification, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
Fig. 1 is a schematic diagram of an application of the input information acquisition method for an interactive system provided by an embodiment of this specification.
Fig. 2 is a flowchart of an input information acquisition method for an interactive system provided by an embodiment of this specification.
Fig. 3 is a schematic functional block diagram of an application system, in a human-computer interaction scenario, of the input information acquisition method for an interactive system provided by an embodiment of this specification.
Fig. 4 is a flowchart of an application system, in a human-computer interaction scenario, of the input information acquisition method for an interactive system provided by an embodiment of this specification.
Fig. 5 is a schematic structural diagram of an input information acquisition apparatus for an interactive system provided by an embodiment of this specification.
Detailed description of the embodiments
To enable a person skilled in the art to better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification are described clearly and completely below with reference to the accompanying drawings in the embodiments. Apparently, the described embodiments are merely some rather than all of the embodiments of the present application. Based on the embodiments of this specification, all other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the protection scope of the present application.
At present, in an interaction scenario, as soon as a user starts to interact, the interactive system extracts interaction information from the captured picture of the interaction area and responds according to the extracted interaction information. This requires the interactive system to remain in a real-time running state. In such an interaction process the interactive system must handle a large amount of data while constantly running in real time, so resource consumption is high; moreover, in some application scenarios this also reduces the fluency of the interaction.
The embodiments of this specification provide an input information acquisition method and apparatus for an interactive system, and an electronic device; an application schematic diagram may be as shown in Fig. 1. An interactive device used to interact with a user is provided with an image acquisition unit 10, an image processing unit (not labeled in the figure), and an interactive system 20. The image acquisition unit 10 is used to capture images of the interaction area, and the image processing unit is used to process the captured images and generate the input information required by the interactive system 20; the interactive system 20 then responds according to the input information, realizing the interaction with the user. In a specific implementation, the image acquisition unit 10 may be an RGBD camera, and the interactive system 20 may be arranged in the interactive device, where the interactive device may be equipment such as an interactive screen, an interactive shelf, an interactive vending machine, an interactive dining table, or a server. The image processing unit may be a function board (for example, an embedded board) arranged in the interactive device, or a function board integrated with the RGBD camera, and the RGBD camera may be arranged at the top of the interactive screen (as shown in the figure) to obtain RGB images and depth images.
Specifically, the overall idea of the input information acquisition method for an interactive system provided by the embodiments of this specification is as follows. The target image of the interaction area is obtained through the image acquisition unit 10, and the target image includes a depth image containing depth information. The image processing unit detects, according to a preset background image and based on the depth image in the target image, whether the target image contains a foreground image, where the background image is the background image of the interaction area and the interaction area is a region whose depth information has been calibrated in advance with a calibration object. In this way, when the target image contains a foreground image, the image processing unit processes the target image to generate the input information for the interactive system 20.
In the embodiments of this specification, the image processing unit is functionally independent of the interactive system 20 in the application environment and may be arranged between the image acquisition unit 10 and the interactive system 20. It can independently perform image processing on the target image obtained by the image acquisition unit 10 and generate, according to the image processing result, the input information required by the interactive system 20 for the interaction. The image processing unit is thus functionally independent of the interactive system 20 and dedicated to the image processing in the interaction; that is, the image processing runs independently on the image processing unit and the interactive system 20 need not participate in it, so the interactive system 20 does not need to remain constantly in a running state and only needs to start running when an interactive response is required.
In the embodiments of this specification, based on the depth image containing the depth information, the image processing unit can both perform the image processing directly, thereby improving the fluency of the interaction, and improve the recognition accuracy in the interaction, avoiding inaccurate recognition caused by changes in the shooting environment. That is, based on the depth image in the target image, the image processing unit can directly compare the target image with the background image, effectively reducing the amount of computation in the processing. It can thus quickly and accurately determine whether the target image contains a foreground image and, when it does, perform image processing on the target image and finally generate the input information required by the interactive system 20 according to the image processing result.
The above application scenario is shown merely for ease of understanding the present application, and the embodiments of this specification are not limited in this regard. On the contrary, the embodiments of this specification can be applied to any applicable scenario.
The input information acquisition method, apparatus, and electronic device for an interactive system provided by the embodiments of this specification are described below with reference to the accompanying drawings.
Embodiment 1
Fig. 2 is a flowchart of an input information acquisition method for an interactive system provided by an embodiment of this specification.
As shown in Fig. 2, the input information acquisition method for the interactive system may include the following steps:
Step S201: obtain a target image of the interaction area through the image acquisition unit.
The interaction area is a region whose depth information has been calibrated in advance with a calibration object, and the target image includes a depth image containing the depth information.
It should be noted that a depth image, also called a range image, is an image in which the distance (also called depth) from the collector of the image acquisition unit (for example, an image sensor) to each point in the interaction area serves as the pixel value. A depth image is thus similar to a grayscale image, except that each pixel value in the depth image is the actual distance from the spatial position of an object in the interaction area to the collector.
In this way, since the target image includes a depth image, the depth image can directly reflect the spatial position information of the geometry of the visible surface of each object in the interaction area, which facilitates performing image processing on the target image based on the depth image in the subsequent processing.
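The grayscale analogy above can be made concrete with a minimal sketch, assuming a depth image is stored as a two-dimensional array whose entries are distances in millimetres (the specification does not fix a unit; millimetres are an illustrative choice):

```python
# A depth image is laid out like a grayscale image, but each "pixel" holds the
# actual distance from the collector to the scene point, e.g. in millimetres.

def depth_stats(depth_image):
    """Return (min, max, mean) distance over a 2D depth image."""
    values = [d for row in depth_image for d in row]
    return min(values), max(values), sum(values) / len(values)

# A tiny 3x3 "depth image": a flat background at ~2000 mm with one closer point.
frame = [
    [2000, 2001, 1999],
    [2000, 1200, 2000],   # 1200 mm: an object in front of the background
    [1998, 2000, 2002],
]

print(depth_stats(frame))
```

Because the pixel values are metric distances rather than brightness, later steps can reason about spatial position directly, which is exactly what the comparison with the background image relies on.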
In a specific implementation, the image acquisition unit can be deployed on site according to actual needs; after deployment, the depth information of its capture area (i.e., the interaction area) is calibrated in advance with a calibration object. Once the depth information of the interaction area has been calibrated, the images captured by the image acquisition unit carry calibrated depth information, so the subsequent processing can perform image processing based on the depth image in the target image. This both facilitates the subsequent image processing and reduces the influence that changes in the capture environment have on the image processing during the interaction, such as changes in lighting or slight changes in the capture angle: once depth images containing calibrated depth information are used, the influence of the capture environment on the image processing (for example, recognition and detection) can be effectively reduced.
In some embodiments, the image acquisition unit may be an RGBD camera, which can capture both a depth image containing depth information and an RGB image containing rich color.
In some embodiments, the image acquisition unit may be a binocular camera capable of capturing depth information, for example a binocular camera that processes depth information at the chip level.
In some embodiments, the target image may be obtained from the image acquisition unit at a predetermined time interval, for example one image of the interaction area captured per second as the target image. Alternatively, the target image may be obtained from a video stream captured by the image acquisition unit, which is not described again here.
In some embodiments, the calibration object used to calibrate the interaction area may be a calibration board with a specific shape and pattern suited to calibration, for example a checkerboard calibration board on which a black-and-white checkerboard pattern is provided.
In some embodiments, several calibration objects may also be placed in the interaction area. The image acquisition unit then captures images of the interaction area containing the calibration objects, and the depth information of the interaction area is calibrated from the distance information between the calibration objects and the image acquisition unit, thereby calibrating the spatial position of the interaction area. After calibration, each pixel value in a captured depth image can characterize the depth information in the interaction area; for example, pixel values correspond to spatial coordinates.
For example, rectangular calibration boards printed with a checkerboard pattern are placed around the interaction area of an interactive screen. The RGBD camera arranged at the top of the interactive screen then captures depth images of the interaction area provided with the calibration boards, so the distance information from the calibration boards to the RGBD camera is obtained through the depth images, and the interaction area corresponding to the interactive screen is thereby calibrated.
It should be noted that a single calibration board may be used, with multiple calibration images obtained by changing its angle and position; or multiple calibration boards may be placed at different locations in the interaction area to obtain multiple calibration images for calibration.
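One simple way such calibration observations could be turned into usable depth information is a least-squares fit from raw sensor readings to known board distances. This is an illustrative sketch, not the specification's actual calibration procedure, and the readings and distances below are assumed values:

```python
# Hypothetical calibration step: the board is observed at several known
# distances, the raw sensor reading is recorded at each, and a linear mapping
# raw reading -> millimetres is fitted by least squares.

def fit_linear(raw, mm):
    """Least-squares fit mm ~= a * raw + b."""
    n = len(raw)
    mean_x = sum(raw) / n
    mean_y = sum(mm) / n
    sxx = sum((x - mean_x) ** 2 for x in raw)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(raw, mm))
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b

# Board observed at 500 mm, 1000 mm, 1500 mm (assumed raw readings).
raw_readings = [120, 245, 370]
known_mm = [500, 1000, 1500]
a, b = fit_linear(raw_readings, known_mm)

def raw_to_mm(raw):
    """Convert a raw depth reading to a calibrated distance in millimetres."""
    return a * raw + b
```

After such a fit, every pixel of a captured depth image can be interpreted as a metric distance in the calibrated interaction area, which is the property the background comparison in step S203 depends on.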
Step S203: detect, by the image processing unit, whether the target image contains a foreground image, and when the target image contains a foreground image, execute step S205.
In a specific implementation, the image processing unit can detect, based on the depth image in the target image and according to the preset background image, whether the target image contains a foreground image. That is, based on the depth information of the depth image, the image processing unit compares the background image with the target image and can thereby quickly and accurately detect whether the target image contains a foreground image.
When no predetermined foreground object appears in the interaction area, the interaction area is in its background state, and the image of the interaction area obtained by the image acquisition unit at such a time serves as the background image. Of course, the background image in the embodiments of this specification is obtained from the calibrated interaction area and therefore also contains depth information; that is, the background image also includes a depth image.
In addition, a foreground image is the image that may correspond to an interacting foreground object during the interaction, and the foreground object can be predefined according to the actual interactive application scenario. For example, in an application scenario that needs to detect whether a person is in the interaction area, the foreground object is a person and the foreground image is an image that may be a person; in an application scenario that needs to detect whether a motor vehicle is in the interaction area, the foreground object is a motor vehicle and the foreground image is an image that may be a motor vehicle.
It should be noted that when detecting the foreground image, moving-foreground detection methods such as background modeling, the frame-difference method, and the optical-flow method can be used.
In some embodiments, the image processing unit may be an embedded board for image processing, for example an NVIDIA GPU (Graphics Processing Unit) development board. Such a GPU development board has an advanced embedded vision computing system and can provide high-performance image processing capability and easy-to-control interfaces, so the GPU development board can conveniently be combined with the image acquisition unit and the interactive system to set up an interactive environment. The GPU development board performs image processing on the target image captured by the image acquisition unit and generates, according to the image processing result, the input information required by the interactive system. In this way the image processing unit can be arranged functionally independent of the interactive system, and a low-cost, structurally simple, flexible, and high-performance interactive application environment can be built according to actual needs.
In some embodiments, background modeling may be performed according to the background image preset in the image processing unit to obtain the background model corresponding to the interaction area. For example, when there is no interacting foreground object in the interaction area, the image acquisition unit captures an image of the interaction area; the depth information of the pixels of the background image is obtained based on the depth image in the captured image and converted into position coordinate information in three-dimensional space; and finally the background model of the spatial background of the interaction area is established from the depth information of each pixel. Subsequent processing can then be performed based on this background model.
It should be noted that when performing background modeling based on the depth image, a suitable modeling approach can be chosen according to actual needs, for example modeling based on a single-point Gaussian model, statistical background modeling, modeling initialized from an image sequence, modeling based on a Gaussian mixture model, or codebook-based modeling, which are not expanded upon here.
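Of the approaches listed, the single-point (per-pixel) Gaussian model is the simplest to sketch. In the sketch below, each pixel keeps a mean and variance learned from a few foreground-free depth frames, and a pixel is declared foreground when it deviates from its mean by more than a few standard deviations; the 3-sigma factor and the 1 mm noise floor are assumptions, not values from the specification:

```python
import math

def build_gaussian_background(frames):
    """Per-pixel mean and variance over a list of 2D background depth frames."""
    h, w = len(frames[0]), len(frames[0][0])
    n = len(frames)
    mean = [[sum(f[i][j] for f in frames) / n for j in range(w)] for i in range(h)]
    var = [[sum((f[i][j] - mean[i][j]) ** 2 for f in frames) / n + 1e-6
            for j in range(w)] for i in range(h)]
    return mean, var

def foreground_mask(frame, mean, var, k=3.0):
    """True where the pixel deviates by more than k sigma (plus a 1 mm floor)."""
    return [[abs(frame[i][j] - mean[i][j]) > k * math.sqrt(var[i][j]) + 1.0
             for j in range(len(frame[0]))] for i in range(len(frame))]

# Three noisy background frames at ~2000 mm, then a frame with a closer object.
frames = [
    [[2000, 2001], [1999, 2000]],
    [[2001, 2000], [2000, 2001]],
    [[1999, 2002], [2001, 1999]],
]
mean, var = build_gaussian_background(frames)
print(foreground_mask([[2000, 1200], [2000, 2000]], mean, var))
```

A real unit would update the model over time (see the update strategies later in this embodiment); this sketch only shows the per-pixel statistics that make the depth comparison robust to sensor noise.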
Since the background modeling and the detection of the target image are both completed based on the depth image, the accuracy of the image detection can be improved, and the subsequent recognition of the foreground object is also facilitated.
Further, when detecting the foreground image, whether a foreground image exists in the target image can be determined based on the established background model. That is, the target image can be compared with the background model based on the depth information contained in the depth image: the depth information at corresponding positions in the two is compared one by one, the region formed by the depth values whose differences exceed a predetermined threshold is regarded as the moving-foreground region, and it is then determined whether the image of the moving-foreground region belongs to a foreground image.
For example, when an interacting object enters the interaction area, the captured target image is an image of the interaction area containing the interacting object. At this point, compared with the background image, the difference in the target image is caused by the interacting object entering the interaction area; that is, the difference between the two images is the image corresponding to the interacting object, and whether the target image contains a foreground image is then determined according to this difference.
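The pixel-by-pixel comparison just described can be sketched directly, assuming a fixed background depth image rather than a full background model; the 100 mm difference threshold and the minimum region size are illustrative assumptions:

```python
# Sketch of the depth comparison: corresponding pixels of the target and
# background depth images are compared one by one; pixels whose difference
# exceeds a threshold form the moving-foreground region, and the target image
# is considered to contain a foreground image when that region is large enough.

def moving_foreground(target, background, diff_mm=100):
    """Set of (row, col) positions whose depth differs by more than diff_mm."""
    return {(i, j)
            for i, row in enumerate(target)
            for j, d in enumerate(row)
            if abs(d - background[i][j]) > diff_mm}

def contains_foreground(target, background, diff_mm=100, min_pixels=2):
    return len(moving_foreground(target, background, diff_mm)) >= min_pixels

background = [[2000] * 4 for _ in range(4)]
person = [row[:] for row in background]
person[1][1] = person[1][2] = person[2][1] = 1300   # a closer object entered

print(contains_foreground(person, background))      # True
print(contains_foreground(background, background))  # False
```

Because only depth values are compared, a lighting change that would confuse an RGB difference leaves this test unaffected, which is the robustness the embodiment attributes to the calibrated depth image.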
Step S205: process the target image by the image processing unit, and generate the input information for the interactive system.
In the embodiments of this specification, performing image processing on the target image includes analyzing the foreground image in the target image and obtaining interaction-related image information based on the analysis result. The image processing unit performs recognition processing on the foreground image so as to obtain the interaction-related information required by the interactive system, and this information is generated as the input information of the interactive system.
For example, the processing may be recognizing the moving foreground to determine whether the moving foreground in the target image belongs to a foreground image; it may be recognizing the number of foreground images to determine the number of corresponding foreground objects; or it may be recognizing the specific content of a foreground image to determine the motion characteristics of the corresponding foreground object in the foreground image.
In some embodiments, the background image may also be updated slowly and automatically in an iterative manner according to the needs of the application scenario. Specifically, the method further includes: when the target image does not contain a foreground image, updating the background image according to a preset update strategy.
In this way, when the target image does not contain a foreground image, the target image captured by the image acquisition unit can serve as the new background image. The background image is iterated slowly so that it reflects, in a timely and accurate manner, the background of the interaction area without foreground objects, further reducing the amount of computation when detecting the target image.
In a specific implementation, the preset update strategy may include a time-based strategy, for example updating the background model at a predetermined time interval, such as updating the background image once per minute. It may also include a threshold-based strategy, for example comparing the target image with the original background image, and when the difference between the two is greater than a predetermined threshold and the target image contains no foreground object, taking the target image as the new background image.
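The two strategies can be combined in one small updater, sketched below under assumed numbers (a 60-second period and a 50 mm mean-difference threshold); the class and method names are illustrative, not from the specification:

```python
# Sketch of the preset update strategies: a time-based rule (refresh the
# background every `period` seconds) and a threshold-based rule (adopt the
# frame as the new background when it has drifted from the stored background),
# both applied only to frames that contain no foreground object.

class BackgroundUpdater:
    def __init__(self, background, period=60.0, drift_threshold=50.0):
        self.background = background
        self.period = period
        self.drift_threshold = drift_threshold
        self.last_update = 0.0

    def _mean_abs_diff(self, frame):
        diffs = [abs(d - b)
                 for frow, brow in zip(frame, self.background)
                 for d, b in zip(frow, brow)]
        return sum(diffs) / len(diffs)

    def maybe_update(self, frame, now, has_foreground):
        if has_foreground:          # never learn a person into the background
            return False
        timed_out = now - self.last_update >= self.period
        drifted = self._mean_abs_diff(frame) > self.drift_threshold
        if timed_out or drifted:
            self.background = [row[:] for row in frame]
            self.last_update = now
            return True
        return False
```

The guard on `has_foreground` matters: updating only from foreground-free frames is what keeps the background image an accurate picture of the empty interaction area, as the embodiment requires.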
According to steps S201 to S205 above, the image processing unit is functionally independent of the interactive system; that is, an image processing unit is added between the image acquisition unit and the interactive system, and the background image of the calibrated interaction area is preset in the image processing unit, so the depth image of the interaction area with calibrated depth information can be used effectively. When the image acquisition unit newly captures a target image, the image processing unit, based on the depth image in the target image, detects quickly and accurately according to the background image whether the target image contains a foreground image; when a foreground image exists, the image processing unit processes the target image and generates the input information of the interactive system according to the image processing result. On the one hand, the background image comes from the interaction area with calibrated depth information, so detecting the target image with the background image based on the depth image and then generating the input information of the interactive system can improve the accuracy of the input information and the efficiency of the image processing, ensuring the fluency of the interaction. On the other hand, based on the depth image, the image processing unit can process the target image directly, effectively reducing the amount of computation; the interactive system need not participate in the image processing, let alone remain constantly in a real-time running state, which saves substantial resources.
Embodiment 2
The input information acquisition method for an interactive system provided in this embodiment, on the basis of Embodiment 1, obtains the input information of the interactive system for the case where the foreground object (i.e., the interacting object) is a human. Content already covered in Embodiment 1 is therefore described only briefly.
Specifically, the foreground image includes a human body image. In this case step S203, that is, the step of detecting whether the target image contains a foreground image, may include: detecting whether the target image contains a target human body image.
In this way, when the target image contains a target human body image, different interactive application scenarios can be established according to the different activity states of people in the interaction area. When a person is detected in the interaction area in step S203, the relevant processing of step S205 can be executed.
In some embodiments, an interaction process needs to be established according to whether there is human activity in the interaction area, for example at an automatic entrance door. Whether the foreground object is a human can be determined by detection using the present method.
Specifically, when the target image contains the target human body image, step S205, that is, the step of processing the target image by the image processing unit and generating the input information of the interactive system, may include: generating, by the image processing unit and according to the appearance and/or disappearance of the target human body image in the target image, corresponding first input information of the interactive system, where the first input information is used to characterize the information for interacting with the interactive system by means of the target human body image appearing and/or disappearing in the target image. In this way, by detecting whether a person appears in and/or leaves the interaction area, the entrance and/or exit information required by the interactive system is generated.
For example, in the interactive application of an automatic door, when a person enters the interaction area, the detection result of step S203 is that a human body image is contained; step S205 then generates, by the image processing unit and according to this detection result, the user's entrance information as the input information of the interactive system. When the person in the interaction area leaves, the detection result of step S203 is that no human body image is contained; step S205 then generates, by the image processing unit and according to this detection result, the user's exit information as the input information of the interactive system.
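Turning per-frame detection results into entrance and exit information amounts to reacting to transitions rather than to every frame. A minimal sketch, with illustrative event names not taken from the specification:

```python
# Sketch of generating "first input information" from per-frame presence
# detections: an event is emitted only when the detection result changes,
# i.e. on a person appearing in or disappearing from the interaction area.

def presence_events(presence_flags):
    """Map a sequence of per-frame 'human detected' booleans to events."""
    events = []
    present = False
    for detected in presence_flags:
        if detected and not present:
            events.append("user_entered")
        elif not detected and present:
            events.append("user_left")
        present = detected
    return events

frames = [False, False, True, True, True, False, True]
print(presence_events(frames))
```

Edge-triggered events like these are what let the interactive system stay idle and start or stop its interactive functions only when an entrance or exit actually occurs.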
By judging the entrance and exit of the interacting object, the subsequent interactive system can start or stop its interactive functions in time, avoiding unnecessary waste of resources while ensuring that the interaction proceeds smoothly.
In some embodiments, when the foreground object is determined to be a person, the foreground image may further be segmented out of the target image so that it can be processed further to generate the input information required by the interactive system.
In an application example, on the basis of determining that the target image contains a target human body image, in order to further obtain effective interaction information, the image processing unit may also segment the target human body image out of the target image according to a preset human body segmentation strategy, that is, obtain the target human body image, and then generate, according to the segmented target human body image, the second input information required by the interactive system, where the second input information is used to characterize the information for interacting with the interactive system by means of the segmented target human body image; for example, the second input information includes the number of people, the entrance time, and the departure time.
The human body segmentation strategy is a strategy for segmenting the human body image out of the target image, and may include preset human body features and/or a preset human body model.
In this way, after it is detected that the target image includes a target human body image, connected component analysis is performed on the target image based on the depth information of the image, and the analyzed connected regions are compared with the preset human body features and/or human body model to identify the human body region, thereby realizing the segmentation of the human body image and obtaining the human body image. Connected component analysis means finding and labeling each connected region in the image, where a connected region is an image region composed of pixels that lie within the same depth value range and are positionally adjacent.
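A minimal sketch of this depth-based connected component analysis, grouping positionally adjacent pixels whose depth values fall within the same range (pure Python flood fill with 4-connectivity; the function name and the tolerance parameter are assumptions, and a production system would use an optimized labeling routine):

```python
def connected_regions(depth, tol=0.1):
    """depth: 2-D list of depth values. Labels each connected region of
    pixels that are adjacent and whose depth lies within tol of the
    region's seed pixel. Returns (label grid, number of regions)."""
    h, w = len(depth), len(depth[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                continue
            next_label += 1
            seed = depth[y][x]
            stack = [(y, x)]
            labels[y][x] = next_label
            while stack:  # flood fill the region around the seed pixel
                cy, cx = stack.pop()
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny][nx]
                            and abs(depth[ny][nx] - seed) <= tol):
                        labels[ny][nx] = next_label
                        stack.append((ny, nx))
    return labels, next_label
```

Each labeled region would then be compared with the preset human body features and/or human body model as described above.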
For example, the number of interactive objects may be generated as second input information according to the quantity of human body images obtained; the behavior of an interactive object, such as standing or squatting, may be generated as second input information according to the posture of the obtained human body image; the corresponding interaction type, such as whether the interactive object is an adult or a child, may be generated as second input information according to the figure of the obtained human body image; and the relative positions of interactive objects, such as front-and-back or side-by-side, may be generated as second input information according to the positions of the obtained human body images.
By obtaining human body images, the interactive objects in the interaction area can be identified. When there are multiple interactive objects, each can be identified independently, avoiding omissions of information in subsequent processing that would affect the interaction.
In one application example, on the basis of the obtained human body image, the image processing unit may also identify a hand image from the human body image and generate the input information required by the interactive system according to the hand image, further enriching the interactive information.
Specifically, on the basis of the target human body image obtained by segmentation, the method may further include: first obtaining, by the image processing unit according to the segmented target human body image, the hand image of the target human body image; and then generating, by the image processing unit according to the hand image, third input information of the interactive system, where the third input information is used to characterize information for interacting with the interactive system that is obtained by using the hand image.
In a specific implementation, during identification of the hand image, the analyzed connected regions may be compared with preset hand features and/or a hand model to identify the hand region; meanwhile, the relative position of the hand region and the human body region may be compared to judge the position of the hand image, for example identifying whether the hand corresponding to the hand image is the left hand or the right hand, or whether it is located above or below the body.
For example, when the input information needs to be generated according to a specific action of the left hand, the third input information is generated according to the hand image of the left hand after the left-hand image is identified; when an interactive behavior matching a hand image and its position relative to the body is preset, the corresponding interactive behavior is obtained according to the identified hand image and its position relative to the body, and matching third input information is generated.
The identification of hand images allows the processing to target the hand at a specific position, which effectively improves the validity of the generated input information, avoids interference of redundant information with the processing, and at the same time reduces the amount of computation, avoiding waste of running resources.
In one application example, in order to give the identified hand image more information so that subsequent processing using the hand image can be more detailed and accurate, the hand image may also be obtained through the following steps: first, the target image includes an RGB image and a depth image, for example a target image captured with an RGBD camera in step S201; then, the hand position of the target human body image is determined according to the depth image in the target image; finally, the hand image of the target human body image is obtained from the corresponding RGB image according to the determined hand position.
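The final step above, extracting the hand image from the RGB image at the position located in the depth image, amounts to a simple region crop. A sketch with assumed names, taking the located hand position as an (x, y, w, h) pixel box and assuming the depth and RGB images are already aligned:

```python
def crop_hand_from_rgb(rgb, hand_box):
    """rgb: 2-D list of pixels indexed [row][column]; hand_box: the
    (x, y, w, h) hand position determined from the depth image.
    Returns the hand image cropped from the corresponding RGB image."""
    x, y, w, h = hand_box
    return [row[x:x + w] for row in rgb[y:y + h]]
```

With a real RGBD camera the depth and color streams must first be registered to a common coordinate frame, which this sketch takes for granted.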
In one application example, on the basis of the obtained hand image of the target human body image, the image processing unit may also perform gesture recognition on the hand image and then generate the input information required by the interactive system according to the gesture recognition result, further enriching the interactive information.
Specifically, after the hand image is obtained, the method further includes: first performing, by the image processing unit, gesture recognition on the hand image according to a preset gesture recognition strategy; and then generating, by the image processing unit according to the gesture recognition result, fourth input information of the interactive system, where the fourth input information is used to characterize information for interacting with the interactive system that is obtained by using the gesture recognition result.
The gesture recognition strategy is used to identify a matching gesture according to the hand image; it may include preset gesture features, and may also include a preset gesture model.
Specifically, the analyzed connected regions may be compared with preset hand features and/or a hand model to identify the hand region, realize the segmentation of the hand image, and thereby obtain the hand image; the obtained hand image is then compared with preset gesture features and/or a gesture model to identify the corresponding gesture.
For example, when the gesture recognition result is a gesture preset for a specific interaction, such as a "V" gesture, input information indicating that the interactive user has performed the predetermined gesture can be generated; when the gesture recognition result is a gesture preset to express a specific meaning, such as an "OK" gesture, input information indicating that the interactive user confirms the interactive behavior can be generated.
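Comparing a hand image against preset gesture features can be sketched as a nearest-template match. The feature vectors, gesture names, and threshold below are illustrative assumptions, and the extraction of the feature vector from the hand image is outside this snippet:

```python
def classify_gesture(feature, templates, threshold=0.2):
    """feature: extracted hand feature vector; templates: dict mapping
    gesture name -> preset gesture feature vector of the same length.
    Returns the best-matching gesture name, or None if no template is
    within the mean-absolute-difference threshold."""
    best, best_dist = None, threshold
    for name, template in templates.items():
        dist = sum(abs(a - b) for a, b in zip(feature, template)) / len(template)
        if dist < best_dist:
            best, best_dist = name, dist
    return best
```

Returning None when nothing matches lets the caller generate no input information for unrecognized hand shapes rather than a spurious one.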
By recognizing gestures, interactive information can be obtained intuitively and the intention of the interactive object can be identified. At the same time, because the hand images used for gesture recognition are obtained from the obtained human body images, the interactive information of each interactive object can be recognized separately, which effectively solves the problem of multi-person gesture recognition and guarantees the comprehensiveness of the obtained interactive information.
It should be noted that, in the embodiments of this specification, the hand image may also be identified before the gesture is recognized, for example performing gesture recognition after the hand image of the left hand has been identified. In this way, not only can the required hand image be obtained according to the needs of the interaction, but the gesture recognition result also includes hand position information, so that the subsequently generated input information can cover more extensive interactive information.
For example, after the hand region is identified by comparing the analyzed connected regions with preset hand features and/or a hand model, the hand image is obtained from the corresponding region of the RGB image, realizing the segmentation of the hand image; then, based on the RGB information of the pixels in the hand image, the obtained hand image is compared with preset gesture features and/or a gesture model to identify the corresponding gesture.
In one application example, during gesture recognition, hand skeleton detection may also be performed on the hand image to obtain the skeleton of the hand, and the gesture is then recognized by discriminating the shapes of the five finger skeletons in the hand skeleton.
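Once the five finger skeletons have been classified as extended or bent, the gesture discrimination itself reduces to a pattern lookup. The finger ordering (thumb to little finger) and the pattern-to-gesture mapping below are illustrative assumptions, not part of the specification:

```python
def gesture_from_fingers(extended):
    """extended: five booleans for (thumb, index, middle, ring, little),
    True when that finger's skeleton is judged straight/extended.
    Returns the discriminated gesture name."""
    pattern = tuple(extended)
    if pattern == (False, True, True, False, False):
        return "V"        # index and middle fingers extended
    if pattern == (False, False, True, True, True):
        return "OK"       # thumb-index ring, other three fingers extended
    return "unknown"
```

The hard part, judging each finger skeleton's shape from the detected joints, is assumed to have been done upstream by the skeleton detection.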
In one application example, the RGB information corresponding to the hand image may be used for gesture recognition; in this way, the details of the hand can be recognized, improving the accuracy and fidelity of the recognition.
In one application example, the input information required by the interactive system may also be generated after integrating the processing results of the segmentation of the target human body image, the identification of the hand image, the gesture recognition, and so on. In this way, the input information of the interactive system is more complete and more convenient for the interactive system to use.
Specifically, after the image processing unit completes gesture recognition on the hand images of all target human body images, the method may further include: generating, by the image processing unit according to the integrated information of each target human body image, fifth input information of the interactive system, where the fifth input information is used to characterize information for interacting with the interactive system that is obtained by using the integrated information of the target human body images, and the integrated information is used to characterize the behavior information, in the interaction, of the target human body corresponding to the target human body image.
The integrated information may include the number of people corresponding to the target human body images, body features, action features (such as gestures), and other information that can characterize a person's behavior in the interaction; these pieces of behavior information are the information corresponding to the input information required for interacting with the interactive system.
Specifically, the image processing unit jointly processes whether the target image includes a human body image, the obtained human body images, and the gesture recognition results, sorts out the effective interactive information from them, and generates the input information of the interactive system.
For example, suppose two people, A and B, are detected in the interaction area, the corresponding human body images of the two are obtained, and the right-hand gesture of A is recognized as an "OK" gesture. The interactive information of A can then be merged; that is, according to the integrated information of A, the input information of A for the interactive system is generated, which may include the fact that the interactive object agrees to start the interactive behavior.
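Merging the per-person results into integrated (fifth) input information, as in the A/B example above, might look like the following sketch; the field names and record layout are assumptions:

```python
def fifth_input(people):
    """people: list of per-person results, each a dict with an 'id' and
    an optional recognized 'gesture'. Returns one integrated input
    record combining the count of people with who has agreed ("OK")."""
    return {
        "count": len(people),
        "agreed": [p["id"] for p in people if p.get("gesture") == "OK"],
    }
```

Keeping the per-person identities in the merged record is what lets the interactive system respond to A's agreement without losing track of B.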
For ease of understanding, the principle of the method in a human-computer interaction application scenario is illustrated below.
Fig. 3 is a functional block diagram of an application system, in a human-computer interaction application scenario, of the input information acquisition method for an interactive system provided by the embodiments of this specification.
As shown in Fig. 3, the image acquisition unit is an RGBD camera and the image processing unit is an embedded board. The RGBD camera acquires the target image of the interaction area, so the target image includes both a depth image and an RGB image. The embedded board processes the received target image according to the background image preset in the embedded board; specifically, it performs human body recognition based on the depth image in the target image, segments the human body image out of the target image if a human body is recognized, locates the hand image in the human body image, obtains the hand image from the RGB image according to the located position, and performs gesture recognition. Finally, the embedded board post-processes the interaction-related information obtained during the processing of the target image, generates the input information, and sends the input information to the interactive system.
Fig. 4 is a flow chart of an application system, in a human-computer interaction application scenario, of the input information method for an interactive system provided by the embodiments of this specification.
As shown in Fig. 4, the input information method of the interactive system of the embodiments of this specification includes the following steps:
Step S401: obtain the target image of the interaction area through an RGBD camera.
An RGBD camera is an RGB camera that can also obtain depth values. By shooting the interaction area with the RGBD camera, depth video information and RGB video information of the interaction area are obtained, so that when the target image of the interaction area is extracted from them, the target image includes both a depth image and an RGB image.
Acquiring the target image through an RGBD camera reduces the overall power consumption of the interactive device and can also effectively reduce the hardware cost.
Step S403: detect, by the embedded board according to the background image preset in the embedded board, whether the target image includes a human body image; if it includes a human body image, proceed to step S405.
The background image can be preset in the embedded board in the form of a background model; based on the depth image, it can be used to quickly detect whether a human body image is included.
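The quick depth-based detection of step S403 can be sketched as background subtraction against the preset depth background model. The threshold and minimum-area parameters below are illustrative assumptions:

```python
def has_foreground(depth, background, tol=0.3, min_pixels=4):
    """depth: current depth image as a 2-D list; background: the preset
    depth background model with the same shape. Reports a candidate
    foreground (possible human body) when enough pixels deviate from
    the background by more than tol."""
    changed = sum(
        1
        for row, bg_row in zip(depth, background)
        for d, b in zip(row, bg_row)
        if abs(d - b) > tol
    )
    return changed >= min_pixels
```

Because a person necessarily sits in front of the calibrated background surfaces, this depth test is cheap enough for the embedded board to run on every frame before any heavier segmentation.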
Step S405: segment, by the embedded board according to the preset human body segmentation strategy, the human body image from the target image.
Step S407: obtain, by the embedded board, the hand image from the human body image, and perform gesture recognition on the hand image according to the preset gesture recognition strategy.
Step S409: generate the input information of the interactive system according to whether the target image includes a human body image, the obtained human body images, and the gesture recognition results.
Embodiment 3
Based on the same inventive concept, the embodiments of this specification also provide a device, an electronic device, and a non-volatile computer storage medium corresponding to the method.
Since the method has been described in detail in the previous embodiments, the corresponding contents of the device, electronic device, and non-volatile computer storage medium corresponding to the method will not be described in detail in the following embodiments.
Fig. 5 is a structural schematic diagram of an input information acquisition device for an interactive system provided by the embodiments of this specification.
As shown in Fig. 5, the input information acquisition device for an interactive system provided by the embodiments of this specification may include an image acquisition unit 501 and an image processing unit 502. The image acquisition unit 501 obtains the target image of the interaction area, where the interaction area is a region whose depth information has been calibrated in advance with a calibration object, and the target image includes a depth image containing the depth information. The image processing unit 502 detects, according to the background image preset in the image processing unit and based on the depth image, whether the target image includes a foreground image, where the background image is the background image of the interaction area; when the target image includes the foreground image, the image processing unit 502 also processes the target image and generates the input information of the interactive system.
Optionally, the image acquisition unit includes an RGBD camera.
Optionally, the calibration object includes a calibration board, and a checkerboard pattern for calibration is provided on the calibration board.
Optionally, when the target image does not include the foreground image, the image processing unit 502 also updates the background image according to a preset update strategy.
Optionally, the foreground image includes a human body image; in this case, detecting, based on the depth image, whether the target image includes a foreground image includes: detecting, based on the depth image, whether the target image includes a target human body image.
Optionally, when the target image includes a target human body image, the image processing unit 502 correspondingly generates first input information of the interactive system according to the appearance and/or disappearance of the target human body image in the target image, where the first input information is used to characterize information for interacting with the interactive system that is obtained by using the appearance and/or disappearance of the target human body image in the target image.
Optionally, when the target image includes a target human body image, the image processing unit 502 segments the target human body image from the target image according to a preset human body segmentation strategy, and generates second input information of the interactive system according to the segmented target human body image, where the second input information is used to characterize information for interacting with the interactive system that is obtained by using the segmented target human body image.
Optionally, the image processing unit 502 also obtains the hand image of the target human body image according to the segmented target human body image, and generates third input information of the interactive system according to the hand image, where the third input information is used to characterize information for interacting with the interactive system that is obtained by using the hand image.
Optionally, the target image further includes an RGB image, and the step of obtaining the hand image of the target human body image according to the segmented target human body image includes: determining the hand position of the target human body image according to the depth image in the target image; and obtaining the hand image of the target human body image from the corresponding RGB image according to the determined hand position.
Optionally, the image processing unit 502 also performs gesture recognition on the hand image according to a preset gesture recognition strategy, and generates fourth input information of the interactive system according to the gesture recognition result, where the fourth input information is used to characterize information for interacting with the interactive system that is obtained by using the gesture recognition result.
Optionally, after completing gesture recognition on the hand images of all target human body images, the image processing unit 502 also generates fifth input information of the interactive system according to the integrated information of each target human body image, where the fifth input information is used to characterize information for interacting with the interactive system that is obtained by using the integrated information of the target human body images, and the integrated information is used to characterize the behavior information, in the interaction, of the target human body corresponding to the target human body image.
Based on the same inventive concept, the embodiments of this specification also provide an electronic device for obtaining the input information of an interactive system, including at least one processor and a memory, where the memory stores a program configured to be executed by the at least one processor to perform the following steps:
obtaining the target image of the interaction area through an image acquisition unit, where the interaction area is a region whose depth information has been calibrated in advance with a calibration object, and the target image includes a depth image containing the depth information;
detecting, by an image processing unit according to the background image preset in the image processing unit and based on the depth image, whether the target image includes a foreground image, where the background image is the background image of the interaction area; and
when the target image includes the foreground image, processing the target image by the image processing unit and generating the input information of the interactive system.
Based on the same inventive concept, the embodiments of this specification also provide a non-volatile computer storage medium for obtaining the input information of an interactive system, including a program used in combination with an electronic device, where the program can be executed by a processor to perform the following steps:
obtaining the target image of the interaction area through an image acquisition unit, where the interaction area is a region whose depth information has been calibrated in advance with a calibration object, and the target image includes a depth image containing the depth information;
detecting, by an image processing unit according to the background image preset in the image processing unit and based on the depth image, whether the target image includes a foreground image, where the background image is the background image of the interaction area; and
when the target image includes the foreground image, processing the target image by the image processing unit and generating the input information of the interactive system.
Specific embodiments of this specification have been described above. Other embodiments fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the figures do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The embodiments in this specification are described in a progressive manner; the same or similar parts of the embodiments can be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the device, equipment, and non-volatile computer storage medium embodiments are substantially similar to the method embodiments, their description is relatively simple, and the relevant parts can refer to the description of the method embodiments.
The device, equipment, and non-volatile computer storage medium provided by the embodiments of this specification correspond to the method; therefore, they also have beneficial technical effects similar to those of the corresponding method. Since the beneficial technical effects of the method have been described in detail above, the beneficial technical effects of the corresponding device, equipment, and non-volatile computer storage medium will not be repeated here.
In the 1990s, an improvement of a technology could be clearly distinguished as an improvement in hardware (for example, an improvement of a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement of a method flow). However, with the development of technology, the improvement of many method flows today can be regarded as a direct improvement of a hardware circuit structure. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that the improvement of a method flow cannot be realized with a hardware entity module. For example, a programmable logic device (PLD) (such as a field programmable gate array (FPGA)) is such an integrated circuit whose logic function is determined by the user's programming of the device. Designers program by themselves to "integrate" a digital system onto a piece of PLD, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, nowadays, instead of manually making integrated circuit chips, this programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code before compilation must also be written in a specific programming language, which is called a hardware description language (HDL). There is not only one kind of HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used. Those skilled in the art should also understand that a hardware circuit realizing a logical method flow can easily be obtained simply by slightly programming the method flow in logic with the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor and a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicone Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing the controller in the form of pure computer-readable program code, it is entirely possible to program the method steps in logic so that the controller realizes the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Therefore, such a controller can be regarded as a hardware component, and the devices included in it for realizing various functions can also be regarded as structures within the hardware component. Or, the devices for realizing various functions can even be regarded both as software modules for implementing the method and as structures within the hardware component.
The systems, devices, modules, or units illustrated in the above embodiments can be specifically implemented by computer chips or entities, or by products with certain functions. A typical implementation device is a computer. Specifically, the computer can be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an e-mail device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above device is described by dividing its functions into various units. Of course, when implementing this application, the functions of the units can be realized in one or more pieces of software and/or hardware.
Those skilled in the art should understand that the embodiments of the present invention can be provided as a method, a system, or a computer program product. Therefore, the present invention can take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the present invention can take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory that can guide a computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce a manufactured product including an instruction device that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing; thus, the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), an input/output interface, a network interface, and a memory.
The memory may include forms of computer-readable media such as non-persistent memory, random access memory (RAM), and/or non-volatile memory, for example read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information can be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, commodity, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, commodity, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, commodity, or device that includes the element.
This application can be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and so on that perform specific tasks or implement specific abstract data types. This application can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules can be located in local and remote computer storage media, including storage devices.
The embodiments in this specification are described in a progressive manner; the same or similar parts of the embodiments can be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively simple, and the relevant parts can refer to the description of the method embodiments.
The above descriptions are only embodiments of this application and are not intended to limit this application. For those skilled in the art, various modifications and changes can be made to this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall be included within the scope of the claims of this application.