CN110083243A - Camera-based interaction method and device, robot, and readable storage medium - Google Patents
Camera-based interaction method and device, robot, and readable storage medium Download PDF Info
- Publication number
- CN110083243A CN110083243A CN201910356945.XA CN201910356945A CN110083243A CN 110083243 A CN110083243 A CN 110083243A CN 201910356945 A CN201910356945 A CN 201910356945A CN 110083243 A CN110083243 A CN 110083243A
- Authority
- CN
- China
- Prior art keywords
- images
- gestures
- gesture
- image
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/113—Recognition of static hand signs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
Abstract
The invention discloses a camera-based interaction method. The method comprises: acquiring a first environment image within a preset range through a camera, and detecting whether a gesture image exists in the first environment image; if a gesture image exists in the first environment image, performing type identification on the gesture image to obtain the gesture type corresponding to the gesture image; and determining a corresponding interactive instruction according to the gesture type, and performing corresponding interaction processing according to the interactive instruction. The invention also discloses a camera-based interactive device, a robot, and a readable storage medium. The robot of the invention can perform gesture recognition through the camera and carry out corresponding interaction processing according to the gesture, which solves the technical problem of the low availability of voice-interaction robots in noisy environments, helps improve the accuracy of received interactive signals, and improves the user experience.
Description
Technical field
The present invention relates to the field of artificial intelligence, and more particularly to a camera-based interaction method and device, a robot, and a readable storage medium.
Background art
With the continuous development and progress of science and technology, various business activities are also changing; for example, intelligent retail stores and supermarkets are gradually moving from concept to reality. At present, some supermarkets (or shops) have begun to deploy intelligent shopping-guide robots, which meet customers' navigation needs through robot interaction. When interacting with these robots, customers generally need to issue interactive instructions to the robot by voice, so that the robot performs the corresponding processing according to the voice instruction. However, when the environment is noisy, the robot's reception and recognition of voice instructions is adversely affected, so that the robot cannot accurately and promptly process and respond to customers' needs, which reduces the availability of the robot.
Summary of the invention
The main objective of the present invention is to provide a camera-based interaction method and device, a robot, and a readable storage medium, aiming to solve the technical problem of the low availability of voice-interaction robots in noisy environments.
To achieve the above objective, an embodiment of the present invention provides a camera-based interaction method. The camera-based interaction method is applied to a robot, the robot comprises a camera, and the camera-based interaction method comprises:
acquiring a first environment image within a preset range through the camera, and detecting whether a gesture image exists in the first environment image;
if a gesture image exists in the first environment image, performing type identification on the gesture image to obtain the gesture type corresponding to the gesture image;
determining a corresponding interactive instruction according to the gesture type, and performing corresponding interaction processing according to the interactive instruction.
Optionally, the camera is a depth camera, the first environment image includes image depth information, and the step of, if a gesture image exists in the first environment image, performing type identification on the gesture image to obtain the gesture type corresponding to the gesture image comprises:
if gesture images exist in the first environment image, determining the number of gesture images;
if there are two or more gesture images, determining the gesture depth of each gesture image according to the image depth information;
determining a target gesture image among the gesture images according to the gesture depth of each gesture image, and performing type identification on the target gesture image to obtain the gesture type corresponding to the target gesture image.
Optionally, the first environment image includes a first-frame environment image and a second-frame environment image, the acquisition time of the first-frame environment image being earlier than that of the second-frame environment image, and the step of, if a gesture image exists in the first environment image, performing type identification on the gesture image to obtain the gesture type corresponding to the gesture image comprises:
if both the first-frame environment image and the second-frame environment image contain gesture images, acquiring the first-frame gesture image in the first-frame environment image and the second-frame gesture image in the second-frame environment image;
obtaining the corresponding first-frame gesture type according to the first-frame gesture image, obtaining the corresponding second-frame gesture type according to the second-frame gesture image, and judging whether the first-frame gesture type and the second-frame gesture type are consistent;
if the first-frame gesture type is consistent with the second-frame gesture type, acquiring the first-frame gesture position of the first-frame gesture image in the first-frame environment image and the second-frame gesture position of the second-frame gesture image in the second-frame environment image, and obtaining the position-track change value between the first-frame gesture position and the second-frame gesture position;
judging whether the position-track change value is greater than a preset threshold;
if the position-track change value is greater than the preset threshold, determining that the interaction gesture corresponding to the first-frame environment image and the second-frame environment image is a dynamic gesture.
Optionally, after the step of judging whether the position-track change value is greater than the preset threshold, the method further comprises:
if the position-track change value is less than or equal to the preset threshold, determining that the interaction gesture corresponding to the first-frame environment image and the second-frame environment image is a static gesture.
Optionally, the robot further includes a display screen, and the camera-based interaction method further comprises:
acquiring a second environment image within the preset range through the camera, and detecting whether a facial image exists in the second environment image;
if a facial image exists in the second environment image, performing special-effect processing on the facial image to obtain a special-effect image, and displaying the special-effect image on the display screen.
Optionally, the robot includes a preset material library, and the step of, if a facial image exists in the second environment image, performing special-effect processing on the facial image to obtain a special-effect image and displaying the special-effect image on the display screen comprises:
if a facial image exists in the second environment image, obtaining special-effect material from the preset material library, and performing special-effect processing on the facial image based on the special-effect material to obtain a special-effect image;
displaying the special-effect image on the display screen.
Optionally, the camera-based interaction method further comprises:
when a material update command sent by a cloud platform is received, obtaining the corresponding new material according to the material update command, and updating the preset material library according to the new material.
In addition, to achieve the above objective, an embodiment of the present invention also provides a camera-based interactive device, comprising:
a gesture detection module, configured to acquire a first environment image within a preset range through a camera and detect whether a gesture image exists in the first environment image;
a type identification module, configured to, if a gesture image exists in the first environment image, perform type identification on the gesture image to obtain the gesture type corresponding to the gesture image;
an interaction processing module, configured to determine a corresponding interactive instruction according to the gesture type and perform corresponding interaction processing according to the interactive instruction.
In addition, to achieve the above objective, an embodiment of the present invention also provides a robot. The robot includes a camera, and further includes a memory, a processor, and an interactive program stored on the memory and executable by the processor, wherein, when the interactive program is executed by the processor, the steps of the camera-based interaction method described above are implemented.
In addition, to achieve the above objective, the present invention also provides a readable storage medium. An interactive program is stored on the readable storage medium, wherein, when the interactive program is executed by a processor, the steps of the camera-based interaction method described above are implemented.
In the present invention, a camera is provided on the robot, so that a user can make different gestures when needing to interact with the robot; the robot then performs gesture recognition through the camera and carries out the corresponding interaction processing according to the gesture, without requiring the user to perform cumbersome operations. The interactive experience in noisy environments is thereby improved: the invention solves the technical problem of the low availability of voice-interaction robots in noisy environments, helps improve the accuracy of received interactive signals, and improves the user experience.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware structure of the robot according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a first embodiment of the camera-based interaction method of the present invention;
Fig. 3 is a detailed flowchart of the step in Fig. 2 of, if a gesture image exists in the first environment image, performing type identification on the gesture image to obtain the gesture type corresponding to the gesture image;
Fig. 4 is another detailed flowchart of the step in Fig. 2 of, if a gesture image exists in the first environment image, performing type identification on the gesture image to obtain the gesture type corresponding to the gesture image.
The realization of the objectives, functional features, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
The camera-based interaction method according to the embodiments of the present invention is mainly applied to a camera-based interactive device, which may be referred to as a robot and has data acquisition, processing, and output functions.
Referring to Fig. 1, Fig. 1 is a schematic diagram of the hardware structure of the robot according to an embodiment of the present invention.
As shown in Fig. 1, the robot may include a processor 1001 (e.g., a CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to realize connection and communication between these components. The user interface 1003 may include a display screen (Display), an input unit such as a keyboard (Keyboard), and a camera. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory, or a stable non-volatile memory such as a magnetic disk memory; optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001. Those skilled in the art will understand that the hardware structure shown in Fig. 1 does not constitute a limitation of the invention, and may include more or fewer components than illustrated, combine certain components, or have a different arrangement of components.
Continuing to refer to Fig. 1, the memory 1005 in Fig. 1, as a computer-readable storage medium, may include an operating system, a network communication module, and an interactive program. In Fig. 1, the network communication module is mainly used to connect to a cloud platform (or a server or terminal) and to carry out data communication with the cloud platform (or server or terminal); and the processor 1001 may call the interactive program stored in the memory 1005 and execute the following steps:
acquiring a first environment image within a preset range through the camera, and detecting whether a gesture image exists in the first environment image;
if a gesture image exists in the first environment image, performing type identification on the gesture image to obtain the gesture type corresponding to the gesture image;
determining a corresponding interactive instruction according to the gesture type, and performing corresponding interaction processing according to the interactive instruction.
Further, the camera is a depth camera, the first environment image includes image depth information, and the step of, if a gesture image exists in the first environment image, performing type identification on the gesture image to obtain the gesture type corresponding to the gesture image comprises:
if gesture images exist in the first environment image, determining the number of gesture images;
if there are two or more gesture images, determining the gesture depth of each gesture image according to the image depth information;
determining a target gesture image among the gesture images according to the gesture depth of each gesture image, and performing type identification on the target gesture image to obtain the gesture type corresponding to the target gesture image.
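The steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the dict keys "bbox" and "depth_mm" are hypothetical names, and the patent leaves the selection rule open, so the sketch assumes one plausible rule, keeping the hand nearest the depth camera on the assumption that it belongs to the interacting user.

```python
def select_target_gesture(gestures):
    """Pick a target gesture image when two or more gestures are detected.

    `gestures` is a hypothetical list of dicts with keys "bbox" and
    "depth_mm" (mean depth of the hand region, taken from the depth
    camera's depth map). Returns the nearest gesture, or None if the
    list is empty.
    """
    if not gestures:
        return None
    # Assumed rule: the hand closest to the camera is the target gesture.
    return min(gestures, key=lambda g: g["depth_mm"])
```

Type identification would then run on the selected target gesture image only, so that a bystander's hand farther from the robot does not trigger an interaction.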
Further, the first environment image includes a first-frame environment image and a second-frame environment image, the acquisition time of the first-frame environment image being earlier than that of the second-frame environment image, and the step of, if a gesture image exists in the first environment image, performing type identification on the gesture image to obtain the gesture type corresponding to the gesture image comprises:
if both the first-frame environment image and the second-frame environment image contain gesture images, acquiring the first-frame gesture image in the first-frame environment image and the second-frame gesture image in the second-frame environment image;
obtaining the corresponding first-frame gesture type according to the first-frame gesture image, obtaining the corresponding second-frame gesture type according to the second-frame gesture image, and judging whether the first-frame gesture type and the second-frame gesture type are consistent;
if the first-frame gesture type is consistent with the second-frame gesture type, acquiring the first-frame gesture position of the first-frame gesture image in the first-frame environment image and the second-frame gesture position of the second-frame gesture image in the second-frame environment image, and obtaining the position-track change value between the first-frame gesture position and the second-frame gesture position;
judging whether the position-track change value is greater than a preset threshold;
if the position-track change value is greater than the preset threshold, determining that the interaction gesture corresponding to the first-frame environment image and the second-frame environment image is a dynamic gesture.
Further, the processor 1001 may also call the interactive program stored in the memory 1005 and execute the following step:
if the position-track change value is less than or equal to the preset threshold, determining that the interaction gesture corresponding to the first-frame environment image and the second-frame environment image is a static gesture.
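The dynamic/static decision above reduces to a distance comparison between the two gesture positions. A minimal sketch, assuming the gesture position is the (x, y) center of the gesture image in pixels and an illustrative threshold of 30 pixels (the patent does not fix the units or the threshold value):

```python
import math

def classify_motion(pos1, pos2, threshold=30.0):
    """Classify an interaction gesture as dynamic or static across two frames.

    pos1/pos2 are (x, y) positions of the same-type gesture in the
    first-frame and second-frame environment images; `threshold` is a
    hypothetical preset pixel distance.
    """
    change = math.dist(pos1, pos2)  # position-track change value
    return "dynamic" if change > threshold else "static"
```

Note that this check only runs after the gesture types in both frames are found consistent; a type change between frames would be treated as a different event rather than as motion.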
Further, the robot further includes a display screen, and the processor 1001 may also call the interactive program stored in the memory 1005 and execute the following steps:
acquiring a second environment image within the preset range through the camera, and detecting whether a facial image exists in the second environment image;
if a facial image exists in the second environment image, performing special-effect processing on the facial image to obtain a special-effect image, and displaying the special-effect image on the display screen.
Further, the robot includes a preset material library, and the step of, if a facial image exists in the second environment image, performing special-effect processing on the facial image to obtain a special-effect image and displaying the special-effect image on the display screen comprises:
if a facial image exists in the second environment image, obtaining special-effect material from the preset material library, and performing special-effect processing on the facial image based on the special-effect material to obtain a special-effect image;
displaying the special-effect image on the display screen.
Further, the processor 1001 may also call the interactive program stored in the memory 1005 and execute the following step:
when a material update command sent by a cloud platform is received, obtaining the corresponding new material according to the material update command, and updating the preset material library according to the new material.
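The material update flow can be sketched as below. All names here are illustrative, not from the patent: the update command is modeled as a dict with hypothetical "material_id" and "url" fields, the material library as a plain dict, and `download(url)` as a caller-supplied function that fetches the new material from the cloud platform.

```python
def apply_material_update(material_db, update_cmd, download):
    """Apply a cloud-issued material update to the local preset material library.

    `update_cmd` is a hypothetical dict such as
    {"material_id": "glasses_01", "url": "https://example.com/glasses_01.png"};
    `download(url)` returns the raw material payload.
    """
    new_material = download(update_cmd["url"])             # fetch the new material
    material_db[update_cmd["material_id"]] = new_material  # update the library
    return material_db
```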
Based on the above hardware structure, the embodiments of the camera-based interaction method of the present invention are proposed.
An embodiment of the present invention provides a camera-based interaction method.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of a first embodiment of the camera-based interaction method of the present invention.
In this embodiment, the camera-based interaction method is applied to a robot, the robot includes a camera, and the camera-based interaction method comprises:
Step S10: acquiring a first environment image within a preset range through the camera, and detecting whether a gesture image exists in the first environment image.
At present, some supermarkets (or shops) have begun to deploy voice-interaction robots, which meet customers' navigation needs through robot interaction. When interacting with these voice-interaction robots, customers generally need to issue interactive instructions to the robot by voice, so that the robot performs the corresponding processing according to the voice instruction. However, when the environment is noisy, the robot's reception and recognition of voice instructions is adversely affected, so that the robot cannot accurately and promptly process and respond to customers' needs, which reduces the availability of the robot. In view of this, this embodiment proposes a camera-based interaction method: a camera is provided on the robot, the user can make different gestures when needing to interact with the robot, and the robot then performs gesture recognition through the camera and carries out the corresponding interaction processing according to the gesture, without requiring the user to perform cumbersome operations. The interactive experience in noisy environments is thereby improved: the method solves the technical problem of the low availability of voice-interaction robots in noisy environments, helps improve the accuracy of received interactive signals, and improves the user experience.
The camera-based interaction method of this embodiment is applied to a robot, which can be configured according to the actual situation. The robot may be deployed in public areas such as shopping malls and supermarkets, and may be provided with a mobility device so that it can move about and carry out daily cruise tasks. The robot includes a camera for acquiring environment images; for convenience of description, the environment image in this embodiment is referred to as the first environment image. When the robot acquires the first environment image through the camera, it can analyze the image to detect whether a gesture image exists in the first environment image, thereby judging whether a user wants to interact.
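One pass of the overall flow (steps S10 through S30) can be sketched as a small driver function. All four callables are placeholders standing in for components the patent describes but does not name: a frame grabber, a gesture detector, the type-identification model, and the instruction dispatcher.

```python
def interact_once(capture_frame, detect_gestures, classify, dispatch):
    """One pass of the gesture-driven interaction flow (steps S10-S30).

    `capture_frame` returns the first environment image; `detect_gestures`
    returns a list of gesture images found in it; `classify` maps a gesture
    image to a gesture type; `dispatch` runs the interactive instruction
    for that type.
    """
    frame = capture_frame()               # first environment image
    gestures = detect_gestures(frame)     # step S10: detect gesture images
    if not gestures:
        return None                       # no user wants to interact
    gesture_type = classify(gestures[0])  # step S20: type identification
    return dispatch(gesture_type)         # step S30: interaction processing
```

In a real deployment this pass would run continuously as part of the robot's cruise loop, with the depth-based target selection and dynamic/static checks described elsewhere in the document slotted in between detection and classification.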
Step S20: if a gesture image exists in the first environment image, performing type identification on the gesture image to obtain the gesture type corresponding to the gesture image.
In this embodiment, when a gesture image is detected in the first environment image, it can be determined that a user made a gesture within the preset range captured by the camera at the time of image acquisition, i.e., that a user wants to interact. The robot then performs type identification on the gesture image to obtain its gesture type. The type identification of the gesture image by the robot can be realized by a machine-learning model, for example by a pre-trained deep network. Specifically, a number of gesture image samples, each labeled with its corresponding sample type, can be collected in advance; an initial network is then established and trained on these gesture image samples until the type identification (or classification) capability of the initial network reaches a certain standard, at which point training is considered complete and the deep network for identifying gesture types is obtained. Whether the type identification (or classification) capability of the initial network has reached the standard can be judged in several ways, for example by computing a loss or by computing the classification accuracy, which will not be detailed here. Once the deep network is obtained, it can be used to perform type identification on the gesture image in the first environment image: a gesture feature is extracted from the gesture image (the gesture feature can be represented as a vector, a matrix, etc.), and the gesture type corresponding to the gesture image is then determined according to the gesture feature. For example, the gesture may be a thumbs-up gesture, an "OK" gesture, and so on.
It is worth noting that, to improve the efficiency of gesture image type identification, the first environment image can also be preprocessed before identification. Specifically, the gesture image can first be segmented from the first environment image, and the segmented gesture image can then be standardized in size and its pixel values unified (e.g., by linear-function normalization or logarithmic-function normalization); for example, the preprocessed gesture image may have the format 224*224*3 (3 referring to the RGB channels). Of course, other preprocessing can also be carried out in practice as needed.
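The linear-function (min-max) normalization mentioned above maps pixel values into [0, 1] by rescaling against the minimum and maximum of the input. A minimal sketch, modeling the channel values as a flat list of ints for brevity (a real pipeline would first segment the gesture and resize it to 224x224x3):

```python
def normalize_pixels(pixels):
    """Linear-function (min-max) normalization of pixel values to [0, 1]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        # A constant image carries no contrast information; map to zeros.
        return [0.0] * len(pixels)
    return [(p - lo) / (hi - lo) for p in pixels]
```

Logarithmic-function normalization, the other option the text names, would instead rescale each value as log(1 + p) / log(1 + max), compressing the bright end of the range.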
Step S30: determining a corresponding interactive instruction according to the gesture type, and performing corresponding interaction processing according to the interactive instruction.
In this embodiment, once the gesture type is determined, the robot queries a preset instruction library for the interactive instruction corresponding to that gesture type. The preset instruction library records a number of gesture types and the interactive instructions corresponding to them. When the interactive instruction corresponding to the gesture type is obtained, the gesture behavior of the current user can be considered to have triggered that interactive instruction, and the robot then carries out the corresponding interaction processing according to the instruction. For example, a light strip may be arranged on the robot's housing (e.g., a circle of light strip around the four sides of the robot's display screen, or a strip in another region), and the instruction corresponding to the thumbs-up gesture may be a light-strip flashing instruction; when the robot determines that the gesture type is the thumbs-up gesture, it determines that the interactive instruction is the light-strip flashing instruction and, according to that instruction, lights the light strip in a flashing mode so that the strip flashes. As another example, the robot includes a display screen, and the "OK" gesture corresponds to a photographing instruction; when the robot determines that the gesture type is the "OK" gesture, it determines that the interactive instruction is the photographing instruction and calls the camera to take a photo according to it. When the photo is taken, the robot can display the corresponding QR code on the display screen so that the user can scan the code to obtain the photo; when the robot receives a photo acquisition request sent by a user terminal that scanned the QR code, it can send the photo to the user terminal, and a related charging process can of course also be performed before sending. It can be understood that, in addition to the above examples, more gesture types and interactive instructions can be set in practice to meet different interaction needs.
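The preset instruction library described above is, in essence, a lookup table from gesture type to interactive instruction. A minimal sketch using the two gestures from the examples; the instruction names are illustrative, not from the patent:

```python
# Hypothetical preset instruction library: gesture type -> interactive instruction.
INSTRUCTION_LIBRARY = {
    "thumbs_up": "flash_light_strip",
    "ok": "take_photo",
}

def lookup_instruction(gesture_type):
    """Query the preset instruction library for the triggered instruction.

    Returns None when the gesture type has no registered instruction,
    in which case the robot simply performs no interaction processing.
    """
    return INSTRUCTION_LIBRARY.get(gesture_type)
```

Extending the interaction repertoire then amounts to adding entries to the library, which matches the text's remark that more gesture types and instructions can be set in practice.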
Further, it in order to cause concern of the user (customer) to robot in market, is also used in and detects that user deposits
When actively issue relevant interaction process, to improve interaction effect.Specifically, the robot also wraps in the present embodiment
Include display screen, the exchange method based on camera further include:
Second environment image in preset range is obtained by the camera, and detects and whether there is in the second environment image
Facial image;
In this embodiment, the robot can likewise obtain an environment image through the camera and detect whether a facial image exists in it. For convenience of description, this image is referred to as the second environment image; that is, the robot detects whether a facial image exists in the second environment image. If no facial image exists in the second environment image, it can be considered that no user is currently within the detection range of the robot; if a facial image does exist in the second environment image, it can be considered that a user is currently within the detection range of the robot, and corresponding processing can be performed on the detected image. The facial-image detection process can also be implemented by means of a deep network, and the training process of the deep network used for facial-image detection is similar to that of the deep network used for gesture-image detection in step S20 (of course, the samples used by the two differ, and the specific structures of the two deep networks may be the same or different), so it is not repeated here. It can be understood that, in addition to a deep network, face detection can also be performed in other ways in practice.
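As a rough illustration of the branching just described (not the patented deep-network detector itself), the handling of the second environment image can be sketched as follows; `detect_faces` is a hypothetical placeholder for whatever detector (deep network or otherwise) is actually used:

```python
# Hedged sketch of the second-environment-image branch described above.
# detect_faces stands in for the real detector; here it simply reads
# pre-computed face bounding boxes from the image record.

def detect_faces(environment_image):
    """Return a list of (x, y, w, h) face bounding boxes."""
    return environment_image.get("faces", [])

def handle_second_environment_image(environment_image):
    """No faces -> no user in detection range; otherwise process every
    detected face (one-to-many when two or more faces are present)."""
    faces = detect_faces(environment_image)
    if not faces:
        return "no user in detection range"
    return ["apply special effect to face at %s" % (box,) for box in faces]
```

With two face boxes present, both are processed in the same pass, matching the one-to-many handling described below.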
If a facial image exists in the second environment image, performing special-effect processing on the facial image to obtain a special-effect image, and displaying the special-effect image on the display screen.
In this embodiment, if a facial image exists in the second environment image, it can be considered that a user is currently within the detection range of the robot, and the robot can perform special-effect processing on the facial image to obtain a special-effect image. Specifically, if a facial image exists in the second environment image, the face region (image) can first be located through pixel-value mutation or in other ways, to determine the relative position of the face region (image) in the second environment image. Once the relative position of the face region (image) is determined, a preset special-effect prop can be added around the relative position or in the face region (such as adding glasses, a beard, earrings, or a picture frame), or a shading effect can be applied to the face region (image) (such as changing the contrast to achieve a whitening effect). When the special-effect image is obtained, the robot displays it on the display screen for the user to view. Meanwhile, upon detecting a facial image, the robot can also issue a related greeting voice (of course, the robot then also needs to include an audio output module such as a loudspeaker to issue the greeting voice) to attract the user's attention. It is worth noting that, when facial images are detected in the second environment image and their quantity is greater than or equal to two, the robot can perform special-effect processing on all of these facial images simultaneously, thereby achieving a one-to-many processing and interaction effect, which helps to improve the user experience.
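The prop-placement step above (adding glasses, a beard, earrings, etc. relative to the located face region) can be sketched as below; the anchor offsets are illustrative assumptions, not values from the embodiment:

```python
def place_prop(face_box, prop_name):
    """Given a located face region (x, y, w, h), return the point at which a
    special-effect prop would be overlaid.  The offsets (fractions of the
    face height) are assumed for illustration only."""
    offsets = {"glasses": 0.3, "beard": 0.8, "earrings": 0.5}
    x, y, w, h = face_box
    return (x, y + int(h * offsets.get(prop_name, 0.0)))
```

Because the placement is expressed relative to the face box, the same rule works for every detected face in a one-to-many frame.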
Further, the robot can also include a default material database in which several special-effect materials are stored. When a facial image is detected in the second environment image, related special-effect materials (such as glasses, a beard, or earrings) can be obtained from this default material database, special-effect processing can be performed on the facial image based on these materials to obtain a special-effect image, and the special-effect image can then be displayed on the display screen.
In practice, the material content of the above default material database can also be continuously updated to keep the materials fresh. Specifically, the robot can be connected to a cloud platform (cloud server) so as to exchange data with it; in practice, the cloud platform can be connected to multiple robots in the mall simultaneously, achieving unified management of the robots. When the cloud platform makes a new material or obtains one in another way, it can send a material update command to the robot. The material update command may include the material content of the new material, or it may include the storage address of the new material; if it includes the storage address, it may also include related permission information for accessing that address, so that the robot can obtain access permission for the storage address based on the permission information. Upon receiving the material update command, the robot can obtain the corresponding new material according to the command and update the default material database accordingly; the update process may further include operations such as deleting or replacing old materials. In addition, the gesture types and their corresponding interactive instructions can be updated in a corresponding manner, and the specific update process is similar to that of the special-effect materials: when the cloud platform has a new gesture and/or interactive instruction, it can send a gesture and/or interactive-instruction update command to the robot, so that the robot performs the gesture and/or interactive-instruction update operation.
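A minimal sketch of the material-update handling described above: the command either carries the material content directly or carries a storage address plus permission information used to fetch it. The field names (`name`, `content`, `address`, `permission`) are assumptions for illustration:

```python
def apply_material_update(material_db, command, fetch=None):
    """Update the default material database from a cloud-platform command.
    `command` carries either the new material's content, or its storage
    address plus optional access-permission info passed to `fetch`.
    Assigning under an existing name covers the replace-old-material case."""
    if "content" in command:
        material = command["content"]
    else:
        material = fetch(command["address"], command.get("permission"))
    material_db[command["name"]] = material
    return material_db
```

A gesture/interactive-instruction update would follow the same shape, with the mapping from gesture types to instructions standing in for the material database.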
In this embodiment, the robot obtains a first environment image within a preset range through the camera and detects whether a gesture image exists in the first environment image; if a gesture image exists in the first environment image, it performs type identification on the gesture image to obtain the gesture type corresponding to the gesture image; it then determines the corresponding interactive instruction according to the gesture type and performs corresponding interaction processing according to the interactive instruction. In the above manner, the robot is provided with a camera; when a user needs to interact with the robot, the user can make different gestures, and the robot performs gesture recognition through the camera and carries out the corresponding interaction processing according to the gesture, without requiring the user to perform cumbersome operations. The interactive experience in a noisy environment is thus stronger, which solves the technical problem of the low availability of voice-interaction robots in noisy environments, helps to improve the accuracy of interactive-signal reception, and improves the user experience.
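The flow summarized above — detect a gesture image, classify it, map the gesture type to an interactive instruction — can be sketched as a single pass; the gesture-to-instruction table is a hypothetical example, not a mapping defined by the embodiment:

```python
GESTURE_TO_INSTRUCTION = {  # assumed mapping, for illustration only
    "thumbs_up": "take_photo",
    "wave": "greet",
}

def process_first_environment_image(frame, recognize_gesture):
    """recognize_gesture returns the gesture type found in the frame, or
    None when no gesture image is present.  The return value is the
    interactive instruction to execute (None when there is nothing to do)."""
    gesture_type = recognize_gesture(frame)
    if gesture_type is None:
        return None
    return GESTURE_TO_INSTRUCTION.get(gesture_type)
```

Keeping the recognizer as a parameter mirrors the description: the deep network used for recognition can be swapped or retrained without touching the instruction mapping.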
Referring to Fig. 3, Fig. 3 is a refined flow diagram of the step of, if a gesture image exists in the first environment image described in Fig. 2, performing type identification on the gesture image to obtain the gesture type corresponding to the gesture image.
Based on the above embodiment shown in Fig. 2, the camera is a depth camera, the first environment image includes image depth information, and the step S20 includes:
Step S21: if a gesture image exists in the first environment image, determining the quantity of the gesture images;
Considering that in practice multiple users might perform gesture behaviors within the shooting range of the same robot at the same time, the robot needs to determine which of these gestures to respond to. Specifically, the camera of the robot is a depth camera (for example, an RGB-D depth camera or a binocular depth camera, i.e., a camera capable of measuring image depth), and the first environment image obtained through the depth camera includes image depth information. The image depth information reflects the depth of the environment and objects in the image, namely the distance between an object and the depth camera (robot) at the moment of shooting. When the robot detects that a gesture image exists in the obtained first environment image, it will further determine the quantity of gesture images, namely the number of users performing gesture behaviors at the same moment.
Step S22: if the quantity of the gesture images is two or more, determining the gesture depth of each gesture image according to the image depth information;
In this embodiment, if the quantity of gesture images is one, gesture-type identification and subsequent interaction processing can be performed on that gesture image directly. If the quantity of gesture images is two or more ("or more" here includes the number itself, the same below), the gesture depth of each gesture image needs to be determined according to the image depth information included in the first environment image, namely the distance between each gesture (or the user making the gesture) and the depth camera (robot) at the moment of shooting.
Step S23: determining a target gesture image among the gesture images according to the gesture depth of each gesture image, and performing type identification on the target gesture image to obtain the gesture type corresponding to the target gesture image.
In this embodiment, upon obtaining the gesture depth of each gesture image, the robot determines a target gesture image among the gesture images according to these gesture depths. For example, the gesture image with the smallest gesture depth can be determined as the target gesture image, i.e., the gesture image corresponding to the gesture closest to the depth camera (robot) at the shooting moment. Type identification can then be performed on the target gesture image to obtain its corresponding gesture type; the specific gesture recognition process is described in the above step S20 and is not repeated here.
In the above manner, when multiple users perform gesture behaviors within the shooting range of the same robot at the same time, the robot can determine a target gesture image among the multiple gesture images by means of image depth and carry out subsequent operations, avoiding cases where multiple simultaneous triggers cannot be responded to or fed back normally, which helps to improve the stability of the interaction processing and the user experience.
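Steps S21 to S23 amount to picking, among simultaneous gestures, the one closest to the depth camera. A minimal sketch under that smallest-depth rule (the example the embodiment itself gives):

```python
def select_target_gesture(gesture_depths):
    """gesture_depths: list of (gesture_id, depth) pairs, one per detected
    gesture image.  With a single gesture it is used directly; with two or
    more, the gesture with the smallest depth (closest to the depth camera
    at the shooting moment) is chosen as the target gesture image."""
    if len(gesture_depths) == 1:
        return gesture_depths[0][0]
    return min(gesture_depths, key=lambda pair: pair[1])[0]
```

Only the selected target gesture image then goes through type identification, so concurrent gestures from several users cannot trigger conflicting responses.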
Referring to Fig. 4, Fig. 4 is another refined flow diagram of the step of, if a gesture image exists in the first environment image described in Fig. 2, performing type identification on the gesture image to obtain the gesture type corresponding to the gesture image.
In this embodiment, the first environment image includes a first frame environment image and a second frame environment image, the acquisition time of the first frame environment image being prior to that of the second frame environment image, and the step S20 includes:
Step S24: if both the first frame environment image and the second frame environment image contain gesture images, obtaining the first-frame gesture image in the first frame environment image, and obtaining the second-frame gesture image in the second frame environment image;
In this embodiment, in order to enrich the types of gestures, the gestures recognizable by the robot include static gestures and dynamic gestures: a static gesture is one in which the hand posture remains unchanged during shooting, while a dynamic gesture is one in which the hand posture can change during shooting (such as a waving gesture of the palm). Specifically, the first environment image captured by the camera includes at least two frames; for convenience of description, this embodiment takes two frames as an example, which can be referred to as the first frame environment image and the second frame environment image, the acquisition time of the first frame environment image being prior to that of the second frame environment image. Upon obtaining the first frame environment image and the second frame environment image, the robot performs gesture-image detection on both frames. If a gesture image is detected in only one of the frames, it can be considered that the interaction gesture corresponding to that gesture image is a static gesture, and processing such as the related gesture-type identification can be performed directly on that gesture image. If gesture images are detected in both frames, the first-frame gesture image in the first frame environment image and the second-frame gesture image in the second frame environment image can be obtained.
Step S25: obtaining the corresponding first-frame gesture type according to the first-frame gesture image, obtaining the corresponding second-frame gesture type according to the second-frame gesture image, and judging whether the first-frame gesture type and the second-frame gesture type are consistent;
In this embodiment, upon obtaining the first-frame gesture image and the second-frame gesture image, the robot can first analyze the gesture posture type corresponding to each of the two gesture images: it obtains the corresponding first-frame gesture type according to the first-frame gesture image, obtains the corresponding second-frame gesture type according to the second-frame gesture image, and judges whether the two are consistent, namely analyzing and determining the hand postures corresponding to the two gesture images. This analysis can be realized through a preset deep network, i.e., features are extracted from each gesture image through the preset deep network to obtain the gesture feature corresponding to each gesture image (a first-frame gesture feature and a second-frame gesture feature, each of which can be represented as a vector or, of course, in another way, such as a matrix), and the corresponding gesture types are then determined according to the gesture features; a specific introduction to the preset deep network can be found in the above step S20. Upon obtaining the first-frame gesture type and the second-frame gesture type, the two can be compared to judge whether they are consistent. If they are consistent, the process proceeds to step S26; if they are inconsistent, it can be considered that the two frame environment images correspond to different gestures, in which case only one of the frame environment images can be taken as the effective environment image, its corresponding gesture type can be analyzed from that effective environment image, and the subsequent interaction processing can then be carried out.
Step S26: if the first-frame gesture type is consistent with the second-frame gesture type, obtaining the first-frame gesture position of the first-frame gesture image in the first frame environment image, obtaining the second-frame gesture position of the second-frame gesture image in the second frame environment image, and obtaining the position-track change value of the first-frame gesture position and the second-frame gesture position;
In this embodiment, if the first-frame gesture type is consistent with the second-frame gesture type, the robot will further obtain the first-frame gesture position of the first-frame gesture image in the first frame environment image and the second-frame gesture position of the second-frame gesture image in the second frame environment image; it then obtains the position-track change value of the first-frame gesture position and the second-frame gesture position, namely the track change of the user's hand during the shooting of the two frame environment images. The first-frame gesture position and the second-frame gesture position can be determined according to the gesture features corresponding to the two frames, and the position-track change value between the two can be determined through the variation relationship between the features, e.g., when the two gesture features are represented as vectors, the change value can be determined from the difference between the two vectors, their linear variation relationship, and so on.
Step S27: judging whether the position-track change value is greater than a preset threshold;

Upon obtaining the position-track change value, it can be compared with a preset threshold to judge whether the position-track change value is greater than the preset threshold.
Step S28: if the position-track change value is greater than the preset threshold, determining that the interaction gesture corresponding to the first frame environment image and the second frame environment image is a dynamic gesture.
If the position-track change value is greater than the preset threshold, it is determined that the interaction gesture corresponding to the first frame environment image and the second frame environment image is a dynamic gesture; in this case, the gesture category corresponding to the two frame environment images is specifically determined according to the first-frame gesture type (or the second-frame gesture type) together with the position-track change value, so as to determine the corresponding interactive instruction, and subsequent processing is carried out according to the interactive instruction. If the position-track change value is less than or equal to the preset threshold, it is determined that the interaction gesture corresponding to the two frame environment images is a static gesture; in this case, the gesture category corresponding to the two frame environment images can be determined directly according to the first-frame gesture type (or the second-frame gesture type), so as to determine the corresponding interactive instruction, and subsequent processing is carried out according to the interactive instruction.
In this embodiment, the first environment image captured by the camera includes at least two frames, and during gesture recognition it is also distinguished whether the gesture is a static gesture or a dynamic gesture, thereby enriching the types of interaction gestures and supporting more interaction modes to meet more interaction demands, which helps to improve the user experience.
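Steps S24 to S28 can be condensed into the following sketch; positions are taken as 2-D points and the position-track change value as their Euclidean distance, which is one instance of the feature-difference measures the description allows:

```python
def classify_interaction_gesture(type1, type2, pos1, pos2, threshold):
    """type1/pos1 come from the first-frame gesture image, type2/pos2 from
    the second frame.  Inconsistent types -> only one frame is kept as the
    effective environment image; otherwise the position-track change value
    against the preset threshold decides between dynamic and static."""
    if type1 != type2:
        return ("single-frame", type1)
    # Position-track change value as the Euclidean distance between frames.
    change = ((pos1[0] - pos2[0]) ** 2 + (pos1[1] - pos2[1]) ** 2) ** 0.5
    if change > threshold:
        return ("dynamic", type1)
    return ("static", type1)
```

For a waving palm, the two frames share a gesture type but the position shifts past the threshold, so the gesture is classified as dynamic.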
In addition, an embodiment of the present invention also provides a camera-based interactive device.
The camera-based interactive device in the embodiment of the present invention includes:
A gesture detection module, for obtaining a first environment image within a preset range through a camera and detecting whether a gesture image exists in the first environment image;
A type identification module, for, if a gesture image exists in the first environment image, performing type identification on the gesture image to obtain the gesture type corresponding to the gesture image;
An interaction processing module, for determining the corresponding interactive instruction according to the gesture type and performing the corresponding interaction processing according to the interactive instruction.
Further, the camera is a depth camera, and the type identification module includes:
A quantity determination unit, for, if a gesture image exists in the first environment image, determining the quantity of the gesture images;
A depth determination unit, for, if the quantity of the gesture images is two or more, determining the gesture depth of each gesture image according to the image depth information;
A type identification unit, for determining a target gesture image among the gesture images according to the gesture depth of each gesture image, and performing type identification on the target gesture image to obtain the gesture type corresponding to the target gesture image.
Further, the first environment image includes a first frame environment image and a second frame environment image, the acquisition time of the first frame environment image being prior to that of the second frame environment image, and the type identification module further includes:
An image acquisition unit, for, if both the first frame environment image and the second frame environment image contain gesture images, obtaining the first-frame gesture image in the first frame environment image and obtaining the second-frame gesture image in the second frame environment image;
A type judgment unit, for obtaining the corresponding first-frame gesture type according to the first-frame gesture image, obtaining the corresponding second-frame gesture type according to the second-frame gesture image, and judging whether the first-frame gesture type and the second-frame gesture type are consistent;
A position acquisition unit, for, if the first-frame gesture type is consistent with the second-frame gesture type, obtaining the first-frame gesture position of the first-frame gesture image in the first frame environment image, obtaining the second-frame gesture position of the second-frame gesture image in the second frame environment image, and obtaining the position-track change value of the first-frame gesture position and the second-frame gesture position;
A position judgment unit, for judging whether the position-track change value is greater than a preset threshold;
A first determination unit, for, if the position-track change value is greater than the preset threshold, determining that the interaction gesture corresponding to the first frame environment image and the second frame environment image is a dynamic gesture.
Further, the type identification module further includes:
A second determination unit, for, if the position-track change value is less than or equal to the preset threshold, determining that the interaction gesture corresponding to the first frame environment image and the second frame environment image is a static gesture.
Further, the camera-based interactive device further includes:
A face detection module, for obtaining a second environment image within a preset range through the camera and detecting whether a facial image exists in the second environment image;
A special-effect processing module, for, if a facial image exists in the second environment image, performing special-effect processing on the facial image to obtain a special-effect image, and displaying the special-effect image on a display screen.
Further, the special-effect processing module is specifically configured to, if a facial image exists in the second environment image, obtain a special-effect material from the default material database, perform special-effect processing on the facial image based on the special-effect material to obtain a special-effect image, and display the special-effect image on the display screen.
Further, the camera-based interactive device further includes:
A material update module, for, upon receiving a material update command sent by the cloud platform, obtaining the corresponding new material according to the material update command and updating the default material database according to the new material.
The function realization of each module in the above camera-based interactive device corresponds to the steps in the above embodiments of the camera-based interaction method, and the functions and realization processes are not repeated here one by one.
In addition, an embodiment of the present invention also provides a robot.
The robot in the embodiment of the present invention includes a camera, and further includes a memory, a processor, and an interactive program that is stored on the memory and executable by the processor; when the interactive program is executed by the processor, the steps of the above camera-based interaction method are realized.
The method realized when the interactive program is executed by the processor can refer to the embodiments of the camera-based interaction method of the present invention, and is not repeated here.
An embodiment of the present invention also provides a readable storage medium.
The readable storage medium in the embodiment of the present invention stores an interactive program; when the interactive program is executed by a processor, the steps of the above camera-based interaction method are realized.
The method realized when the interactive program is executed by the processor can refer to the embodiments of the camera-based interaction method of the present invention, and is not repeated here.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article, or system. In the absence of further restrictions, an element limited by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or system that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be realized by means of software plus the necessary general hardware platform, and of course also by hardware, but in many cases the former is the better embodiment. Based on this understanding, the technical solution of the present invention, or the part thereof that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium as described above (such as a ROM/RAM, magnetic disk, or optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, etc.) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention; any equivalent structural or process transformation made by using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (10)
1. A camera-based interaction method, characterized in that the camera-based interaction method is applied to a robot, the robot includes a camera, and the camera-based interaction method includes:
Obtaining a first environment image within a preset range through the camera, and detecting whether a gesture image exists in the first environment image;
If a gesture image exists in the first environment image, performing type identification on the gesture image to obtain the gesture type corresponding to the gesture image;
Determining the corresponding interactive instruction according to the gesture type, and performing corresponding interaction processing according to the interactive instruction.
2. The camera-based interaction method according to claim 1, characterized in that the camera is a depth camera, the first environment image includes image depth information, and
the step of, if a gesture image exists in the first environment image, performing type identification on the gesture image to obtain the gesture type corresponding to the gesture image includes:
If a gesture image exists in the first environment image, determining the quantity of the gesture images;
If the quantity of the gesture images is two or more, determining the gesture depth of each gesture image according to the image depth information;
Determining a target gesture image among the gesture images according to the gesture depth of each gesture image, and performing type identification on the target gesture image to obtain the gesture type corresponding to the target gesture image.
3. The camera-based interaction method according to claim 1, characterized in that the first environment image includes a first frame environment image and a second frame environment image, the acquisition time of the first frame environment image being prior to that of the second frame environment image, and
the step of, if a gesture image exists in the first environment image, performing type identification on the gesture image to obtain the gesture type corresponding to the gesture image includes:
If both the first frame environment image and the second frame environment image contain gesture images, obtaining the first-frame gesture image in the first frame environment image, and obtaining the second-frame gesture image in the second frame environment image;
Obtaining the corresponding first-frame gesture type according to the first-frame gesture image, obtaining the corresponding second-frame gesture type according to the second-frame gesture image, and judging whether the first-frame gesture type and the second-frame gesture type are consistent;
If the first-frame gesture type is consistent with the second-frame gesture type, obtaining the first-frame gesture position of the first-frame gesture image in the first frame environment image, obtaining the second-frame gesture position of the second-frame gesture image in the second frame environment image, and obtaining the position-track change value of the first-frame gesture position and the second-frame gesture position;
Judging whether the position-track change value is greater than a preset threshold;
If the position-track change value is greater than the preset threshold, determining that the interaction gesture corresponding to the first frame environment image and the second frame environment image is a dynamic gesture.
4. The camera-based interaction method according to claim 3, characterized in that, after the step of judging whether the position-track change value is greater than the preset threshold, the method further includes:
If the position-track change value is less than or equal to the preset threshold, determining that the interaction gesture corresponding to the first frame environment image and the second frame environment image is a static gesture.
5. The camera-based interaction method according to any one of claims 1 to 4, characterized in that the robot further includes a display screen, and
the camera-based interaction method further includes:
Obtaining a second environment image within a preset range through the camera, and detecting whether a facial image exists in the second environment image;
If a facial image exists in the second environment image, performing special-effect processing on the facial image to obtain a special-effect image, and displaying the special-effect image on the display screen.
6. The camera-based interaction method according to claim 5, characterized in that the robot includes a default material database, and
the step of, if a facial image exists in the second environment image, performing special-effect processing on the facial image to obtain a special-effect image, and displaying the special-effect image on the display screen includes:
If a facial image exists in the second environment image, obtaining a special-effect material from the default material database, and performing special-effect processing on the facial image based on the special-effect material to obtain a special-effect image;
Displaying the special-effect image on the display screen.
7. The camera-based interaction method according to claim 6, characterized in that the camera-based interaction method further includes:
Upon receiving a material update command sent by a cloud platform, obtaining the corresponding new material according to the material update command, and updating the default material database according to the new material.
8. A camera-based interactive device, characterized in that the camera-based interactive device includes:
A gesture detection module, for obtaining a first environment image within a preset range through a camera and detecting whether a gesture image exists in the first environment image;
A type identification module, for, if a gesture image exists in the first environment image, performing type identification on the gesture image to obtain the gesture type corresponding to the gesture image;
An interaction processing module, for determining the corresponding interactive instruction according to the gesture type and performing the corresponding interaction processing according to the interactive instruction.
9. A robot, characterized in that the robot includes a camera, and further includes a memory, a processor, and an interactive program that is stored on the memory and executable by the processor, wherein, when the interactive program is executed by the processor, the steps of the camera-based interaction method according to any one of claims 1 to 7 are realized.
10. A readable storage medium, wherein an interaction program is stored on the readable storage medium, and wherein the interaction program, when executed by a processor, implements the steps of the camera-based interaction method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910356945.XA CN110083243A (en) | 2019-04-29 | 2019-04-29 | Camera-based interaction method and device, robot, and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110083243A (en) | 2019-08-02 |
Family
ID=67417836
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910356945.XA Pending CN110083243A (en) | 2019-04-29 | 2019-04-29 | Camera-based interaction method and device, robot, and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110083243A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130343611A1 (en) * | 2011-03-04 | 2013-12-26 | Hewlett-Packard Development Company, L.P. | Gestural interaction identification |
CN104750252A (en) * | 2015-03-09 | 2015-07-01 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN106648078A (en) * | 2016-12-05 | 2017-05-10 | 北京光年无限科技有限公司 | Multimode interaction method and system applied to intelligent robot |
CN107688779A (en) * | 2017-08-18 | 2018-02-13 | 北京航空航天大学 | A kind of robot gesture interaction method and apparatus based on RGBD camera depth images |
CN108594997A (en) * | 2018-04-16 | 2018-09-28 | 腾讯科技(深圳)有限公司 | Gesture framework construction method, apparatus, equipment and storage medium |
CN109472764A (en) * | 2018-11-29 | 2019-03-15 | 广州市百果园信息技术有限公司 | Method, apparatus, equipment and the medium of image synthesis and the training of image synthetic model |
CN109508678A (en) * | 2018-11-16 | 2019-03-22 | 广州市百果园信息技术有限公司 | Training method, the detection method and device of face key point of Face datection model |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110434853A (en) * | 2019-08-05 | 2019-11-12 | 北京云迹科技有限公司 | A kind of robot control method, device and storage medium |
CN110434853B (en) * | 2019-08-05 | 2021-05-14 | 北京云迹科技有限公司 | Robot control method, device and storage medium |
CN111126279A (en) * | 2019-12-24 | 2020-05-08 | 深圳市优必选科技股份有限公司 | Gesture interaction method and gesture interaction device |
CN111126279B (en) * | 2019-12-24 | 2024-04-16 | 深圳市优必选科技股份有限公司 | Gesture interaction method and gesture interaction device |
CN113534944A (en) * | 2020-04-13 | 2021-10-22 | 百度在线网络技术(北京)有限公司 | Service feedback method, service feedback device, electronic equipment and storage medium |
CN111949134A (en) * | 2020-08-28 | 2020-11-17 | 深圳Tcl数字技术有限公司 | Human-computer interaction method, device and computer-readable storage medium |
CN112306235A (en) * | 2020-09-25 | 2021-02-02 | 北京字节跳动网络技术有限公司 | Gesture operation method, device, equipment and storage medium |
CN112306235B (en) * | 2020-09-25 | 2023-12-29 | 北京字节跳动网络技术有限公司 | Gesture operation method, device, equipment and storage medium |
CN114419694A (en) * | 2021-12-21 | 2022-04-29 | 珠海视熙科技有限公司 | Processing method and processing device for head portrait of multi-person video conference |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110083243A (en) | Camera-based interaction method and device, robot, and readable storage medium | |
US9898647B2 (en) | Systems and methods for detecting, identifying and tracking objects and events over time | |
CN107820020A (en) | Method of adjustment, device, storage medium and the mobile terminal of acquisition parameters | |
CN108197618B (en) | Method and device for generating human face detection model | |
CN109983759A (en) | The system and method adjusted for fast video acquisition and sensor | |
CN109308469B (en) | Method and apparatus for generating information | |
CN109087376B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN111582116B (en) | Video erasing trace detection method, device, equipment and storage medium | |
CN109167910A (en) | focusing method, mobile terminal and computer readable storage medium | |
CN110321863A (en) | Age recognition methods and device, storage medium | |
US20200412864A1 (en) | Modular camera interface | |
KR102467015B1 (en) | Explore media collections using opt-out interstitial | |
CN110298212B (en) | Model training method, emotion recognition method, expression display method and related equipment | |
KR20230022232A (en) | Machine Learning in Augmented Reality Content Items | |
CN109033935B (en) | Head-up line detection method and device | |
CN107704514A (en) | A kind of photo management method, device and computer-readable recording medium | |
CN102857685A (en) | Image capturing method and image capturing system | |
CN109271929B (en) | Detection method and device | |
CN109241921A (en) | Method and apparatus for detecting face key point | |
CN110298327A (en) | A kind of visual effect processing method and processing device, storage medium and terminal | |
US20230091214A1 (en) | Augmented reality items based on scan | |
CN107679532B (en) | Data transmission method, device, mobile terminal and computer readable storage medium | |
CN109739414A (en) | A kind of image processing method, mobile terminal, computer readable storage medium | |
CN114282587A (en) | Data processing method and device, computer equipment and storage medium | |
CN108537165A (en) | Method and apparatus for determining information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190802 |