CN109725703A - Human-computer interaction method, device, and computer-readable storage medium - Google Patents

Human-computer interaction method, device, and computer-readable storage medium

Info

Publication number
CN109725703A
CN109725703A (application CN201711025265.7A)
Authority
CN
China
Prior art keywords
hand
virtual
change
shape
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201711025265.7A
Other languages
Chinese (zh)
Inventor
岳培锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp
Priority to CN201711025265.7A
Publication of CN109725703A
Legal status: Withdrawn (current)


Abstract

The invention discloses a human-computer interaction method, comprising: when a hand is identified in a user image according to a marker, extracting feature information of the hand; converting the position of the hand in a real coordinate system into the position of a virtual hand in a virtual coordinate system, and outputting the virtual hand and the virtual coordinate system, wherein the real coordinate system is a three-dimensional coordinate system referenced to a camera, the virtual coordinate system is a three-dimensional coordinate system obtained by scaling down the real coordinate system by a preset ratio, and the virtual hand is generated from the feature information of the hand; obtaining shape changes and a motion trajectory of the virtual hand in the virtual coordinate system according to shape changes and a motion trajectory of the hand in the real coordinate system; obtaining an operation instruction corresponding to the shape change of the hand according to a preset correspondence between hand shape changes and operation instructions; and processing a content object according to the operation instruction and outputting a processing result. The invention also discloses a human-computer interaction device and a computer-readable storage medium.

Description

Human-computer interaction method, device, and computer-readable storage medium
Technical field
The present invention relates to the field of augmented reality (AR) technology, and in particular to a human-computer interaction method, device, and computer-readable storage medium.
Background art
As the AR market continues to heat up, AR technology has gradually entered the public eye. It is now used in games, text recognition, and many other fields, strongly impacting traditional media. However, AR technology is generally deployed on mobile phones or tablet computers, and drawbacks such as small screens and limited battery life constrain its reach. Set-top boxes, meanwhile, are still the media center of most homes: they carry public broadcasting interaction and provide users with a large number of high-definition, high-quality programs.
Existing set-top boxes are mostly conventional set-top boxes, and users can only interact with them through a remote control or keys. This control mode is monotonous and cannot provide touch-like manipulation. How to combine AR technology with a set-top box is therefore a problem that urgently needs to be solved.
Summary of the invention
In view of this, embodiments of the present invention provide a human-computer interaction method, device, and computer-readable storage medium that combine AR technology with the local interface to receive control from the user, achieving a touch-like manipulation effect and greatly enhancing the user experience.
To achieve the above objective, the technical solution of the present invention is implemented as follows:
The present invention provides a human-computer interaction method, the method comprising:
when a hand is identified in a user image according to a marker, extracting feature information of the hand;
converting the position of the hand in a real coordinate system into the position of a virtual hand in a virtual coordinate system, and outputting the virtual hand and the virtual coordinate system; wherein the real coordinate system is a three-dimensional coordinate system referenced to a camera, the virtual coordinate system is a three-dimensional coordinate system obtained by scaling down the real coordinate system by a preset ratio, and the virtual hand is generated from the feature information of the hand;
obtaining shape changes and a motion trajectory of the virtual hand in the virtual coordinate system according to shape changes and a motion trajectory of the hand in the real coordinate system;
obtaining an operation instruction corresponding to the shape change of the hand according to a preset correspondence between hand shape changes and operation instructions;
processing a content object according to the operation instruction, and outputting a processing result.
In the above scheme, before extracting the feature information of the hand, the method further includes:
acquiring the user image through the camera;
judging whether the hand in the user image is identified according to the marker; wherein the marker includes anchor points of the hand in the three-dimensional coordinate system, the anchor points being located on the five fingers of the hand and around the palm.
In the above scheme, before obtaining the shape changes and motion trajectory of the virtual hand in the virtual coordinate system according to the shape changes and motion trajectory of the hand in the real coordinate system, the method further includes:
obtaining the shape changes and motion trajectory of the hand in the real coordinate system by combining background subtraction, inter-frame differencing, and the optical flow method.
In the above scheme, before obtaining the shape changes and motion trajectory of the virtual hand in the virtual coordinate system according to the shape changes and motion trajectory of the hand in the real coordinate system, the method further includes:
predicting the shape changes and motion trajectory of the hand in the real coordinate system according to performance features of each frame of the hand acquired by the camera.
In the above scheme, the performance features include geometric features, statistical features, transform-domain features, and color features;
the geometric features include perimeter, area, ellipticity, and height;
the statistical features include gray-level mean and variance, histogram, entropy, moments, and contrast relative to the background;
the transform-domain features include Fourier transform-domain, Gabor transform-domain, and wavelet transform-domain features.
In the above scheme, outputting the virtual hand and the virtual coordinate system includes:
outputting the virtual hand and the virtual coordinate system to a display unit through a High-Definition Multimedia Interface (HDMI) or a Composite Video Broadcast Signal (CVBS) interface;
outputting the processing result includes:
outputting the processing result to the display unit through HDMI or CVBS.
The present invention further provides a human-computer interaction device, the device comprising an interface, a bus, a memory, and a processor, the interface, the memory, and the processor being connected through the bus, the memory being configured to store an executable program, and the processor being configured to run the executable program to implement the following steps:
when a hand is identified in a user image according to a marker, extracting feature information of the hand;
converting the position of the hand in a real coordinate system into the position of a virtual hand in a virtual coordinate system, and outputting the virtual hand and the virtual coordinate system; wherein the real coordinate system is a three-dimensional coordinate system referenced to a camera, the virtual coordinate system is a three-dimensional coordinate system obtained by scaling down the real coordinate system by a preset ratio, and the virtual hand is generated from the feature information of the hand;
obtaining shape changes and a motion trajectory of the virtual hand in the virtual coordinate system according to shape changes and a motion trajectory of the hand in the real coordinate system;
obtaining an operation instruction corresponding to the shape change of the hand according to a preset correspondence between hand shape changes and operation instructions;
processing a content object according to the operation instruction, and outputting a processing result.
In the above scheme, the processor is further configured to run the executable program to implement the following steps:
acquiring the user image through the camera;
judging whether the hand in the user image is identified according to the marker; wherein the marker includes anchor points of the hand in the three-dimensional coordinate system, the anchor points being located on the five fingers of the hand and around the palm.
In the above scheme, the processor is further configured to run the executable program to implement the following step:
obtaining the shape changes and motion trajectory of the hand in the real coordinate system by combining background subtraction, inter-frame differencing, and the optical flow method.
In the above scheme, the processor is further configured to run the executable program to implement the following step:
predicting the shape changes and motion trajectory of the hand in the real coordinate system according to the performance features of each frame of the hand acquired by the camera.
In the above scheme, the performance features include geometric features, statistical features, transform-domain features, and color features;
the geometric features include perimeter, area, ellipticity, and height;
the statistical features include gray-level mean and variance, histogram, entropy, moments, and contrast relative to the background;
the transform-domain features include Fourier transform-domain, Gabor transform-domain, and wavelet transform-domain features.
In the above scheme, the processor is configured to run the executable program to implement the following step:
outputting the virtual hand and the virtual coordinate system to a display unit through a High-Definition Multimedia Interface (HDMI) or a Composite Video Broadcast Signal (CVBS) interface;
the processor is configured to run the executable program to implement the following step:
outputting the processing result to the display unit through HDMI or CVBS.
The present invention further provides a computer-readable storage medium storing a program which, when executed by a processor, implements the following steps:
when a hand is identified in a user image according to a marker, extracting feature information of the hand;
converting the position of the hand in a real coordinate system into the position of a virtual hand in a virtual coordinate system, and outputting the virtual hand and the virtual coordinate system; wherein the real coordinate system is a three-dimensional coordinate system referenced to a camera, the virtual coordinate system is a three-dimensional coordinate system obtained by scaling down the real coordinate system by a preset ratio, and the virtual hand is generated from the feature information of the hand;
obtaining shape changes and a motion trajectory of the virtual hand in the virtual coordinate system according to shape changes and a motion trajectory of the hand in the real coordinate system;
obtaining an operation instruction corresponding to the shape change of the hand according to a preset correspondence between hand shape changes and operation instructions;
processing a content object according to the operation instruction, and outputting a processing result.
With the human-computer interaction method, device, and computer-readable storage medium provided by the present invention, when a hand is identified in a user image according to a marker, feature information of the hand is extracted; the position of the hand in the real coordinate system is converted into the position of a virtual hand in a virtual coordinate system, and the virtual hand and the virtual coordinate system are output; shape changes and a motion trajectory of the virtual hand in the virtual coordinate system are obtained according to the shape changes and motion trajectory of the hand in the real coordinate system; the operation instruction corresponding to the shape change of the hand is obtained according to a preset correspondence between hand shape changes and operation instructions; and the content object is processed according to the operation instruction and the processing result is output. With this solution, the user's hand and its shape changes and motion trajectory are presented on the display unit in real time, and the content object is processed according to the operation instruction corresponding to the shape change of the hand. Control by the user is thus received by combining AR technology with the local interface, achieving a touch-like manipulation effect and greatly enhancing the user experience.
Brief description of the drawings
Fig. 1 is a flowchart of method embodiment one of the human-computer interaction of the present invention;
Fig. 2 is a flowchart of method embodiment two of the human-computer interaction of the present invention;
Fig. 3 is a schematic diagram of identifying the hand in a user image in method embodiment two of the human-computer interaction of the present invention;
Fig. 4 is a schematic diagram of scene one of method embodiment two of the human-computer interaction of the present invention;
Fig. 5 is a schematic diagram of scene three of method embodiment two of the human-computer interaction of the present invention;
Fig. 6 is a schematic structural diagram of the apparatus embodiment of the human-computer interaction of the present invention;
Fig. 7 is a schematic structural diagram of the device embodiment of the human-computer interaction of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention.
Fig. 1 is a flowchart of method embodiment one of the human-computer interaction of the present invention. As shown in Fig. 1, the human-computer interaction method provided by this embodiment of the present invention can be applied to a human-computer interaction device (hereinafter referred to as the device), which may be a set-top box, a television, a projector, or the like.
Step 101: when a hand is identified in a user image according to a marker, extract feature information of the hand.
The device can extract the feature information of the hand when it identifies the hand in the user image according to the marker. The user image is acquired by a local camera arranged on the device, and the feature information of the hand includes, but is not limited to, at least one of size, shape, and skin color.
Step 102: convert the position of the hand in the real coordinate system into the position of a virtual hand in a virtual coordinate system, and output the virtual hand and the virtual coordinate system.
After extracting the feature information of the hand, the device uses the feature information to convert the hand into a virtual hand and, at the same time, converts the position of the hand in the real coordinate system into a position in the virtual coordinate system. Here the real coordinate system is a three-dimensional coordinate system referenced to the camera, that is, a three-dimensional coordinate system with the camera as the coordinate origin, and the virtual coordinate system is a three-dimensional coordinate system obtained by scaling down the real coordinate system by a preset ratio. The preset ratio can be adjusted adaptively according to the actual situation; for example, a preset ratio can be generated automatically once the length and width of the display unit are known.
After the virtual hand and the virtual coordinate system are determined, they are output to the display unit for real-time display.
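As an illustration only, the following minimal Python sketch shows one way the conversion and scaling described above might be realized; the helper names, the rule for deriving the preset ratio from the display's length and width, and the reference dimensions are assumptions, not part of the patent text.

```python
import numpy as np

def preset_ratio(display_w, display_h, reference_w=3840, reference_h=2160):
    """One possible (assumed) rule: generate the reduction ratio automatically
    once the display unit's length and width are known, shrinking a reference
    working extent so the virtual coordinate system fits the display."""
    return min(display_w / reference_w, display_h / reference_h)

def real_to_virtual(hand_xyz, ratio):
    """Convert a hand position in the camera-centred real coordinate system
    into the virtual coordinate system scaled down by the preset ratio."""
    return np.asarray(hand_xyz, dtype=float) * ratio

ratio = preset_ratio(1920, 1080)                   # e.g. a 1080p display unit
virtual_pos = real_to_virtual((0.12, -0.05, 0.80), ratio)
```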
Step 103: obtain shape changes and a motion trajectory of the virtual hand in the virtual coordinate system according to the shape changes and motion trajectory of the hand in the real coordinate system.
After tracking in real time and obtaining the shape changes and motion trajectory of the hand in the real coordinate system, the device obtains the shape changes and motion trajectory of the virtual hand in the virtual coordinate system from them, and then outputs the shape changes and motion trajectory of the virtual hand in the virtual coordinate system to the display unit for real-time display.
Step 104: obtain the operation instruction corresponding to the shape change of the hand according to the preset correspondence between hand shape changes and operation instructions.
After obtaining the shape change of the hand, the device can obtain the corresponding operation instruction in real time from the preset correspondence between hand shape changes and operation instructions. The preset correspondence may take the form of a list, a table, a database, or the like; it is stored in the device in advance and can be configured according to the actual situation, without limitation here. For example, a single-finger tap corresponds to a click instruction, a single-finger tap followed by a slide corresponds to a drag instruction, and two fingers spreading apart corresponds to a zoom-in instruction.
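A minimal sketch of such a correspondence, assuming a plain in-memory dictionary keyed by a gesture label; the labels and the lookup helper are illustrative only and not taken from the patent.

```python
# Assumed gesture labels -> operation instructions; stored in advance and
# configurable, as the description allows (list, table, or database).
GESTURE_TO_INSTRUCTION = {
    "single_finger_tap":       "click",
    "single_finger_tap_slide": "drag",
    "two_finger_spread":       "zoom_in",
    "two_finger_pinch":        "zoom_out",
}

def instruction_for(shape_change_label):
    """Return the operation instruction for a recognised hand shape change,
    or None if no correspondence has been preset."""
    return GESTURE_TO_INSTRUCTION.get(shape_change_label)

print(instruction_for("single_finger_tap"))   # -> "click"
```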
Step 105: process the content object according to the operation instruction, and output the processing result.
The device processes the content object according to the corresponding operation instruction to obtain a processing result, and then presents the processing result through the display unit.
In this embodiment of the present invention, if the device is a set-top box, the display unit may be a smart TV, a projector, or a display device in large-screen application scenarios where direct user contact is inconvenient, such as virtual reality (VR). Combining AR technology with set-top box interaction greatly improves on the monotony and operational difficulty of conventional set-top box human-computer interaction.
In the human-computer interaction method provided by this embodiment of the present invention, when a hand is identified in a user image according to a marker, feature information of the hand is extracted; the position of the hand in the real coordinate system is converted into the position of a virtual hand in a virtual coordinate system, and the virtual hand and the virtual coordinate system are output; shape changes and a motion trajectory of the virtual hand in the virtual coordinate system are obtained according to the shape changes and motion trajectory of the hand in the real coordinate system; the operation instruction corresponding to the shape change of the hand is obtained according to the preset correspondence between hand shape changes and operation instructions; and the content object is processed according to the operation instruction and the processing result is output. With this solution, the user's hand and its shape changes and motion trajectory are presented on the display unit in real time, and the content object is processed according to the operation instruction corresponding to the shape change of the hand, so that control by the user is received in combination with the local interface, achieving a touch-like manipulation effect and greatly enhancing the user experience.
On the basis of the above embodiment, a further illustration is given below to better embody the purpose of the present invention.
Fig. 2 is a flowchart of method embodiment two of the human-computer interaction of the present invention. As shown in Fig. 2, the human-computer interaction method provided by this embodiment of the present invention is applied to a set-top box with a camera, and may include the following steps:
Step 201: acquire a user image through the camera.
The set-top box acquires the user image of the current moment through the local camera.
Step 202: judge whether a hand is identified in the user image according to the marker.
After acquiring the user image of the current moment, the set-top box judges whether a hand is identified in the user image according to the marker. If no hand is identified in the user image according to the marker, step 203 is executed; if a hand is identified in the user image according to the marker, step 204 is executed.
The marker includes anchor points of the left hand in the three-dimensional coordinate system and anchor points of the right hand in the three-dimensional coordinate system, the anchor points being located on the five fingers of the left hand and around its palm, or on the five fingers of the right hand and around its palm.
Specifically, the marker functions like a pad used to identify the position of the virtual image in the video. The marker consists of image-recognition feature-extraction points of the user's two hands, located on the ten fingers and in front of, behind, above, and below the palms, in the eight spatial octants divided by the three coordinate axes. Their spatial positions can be represented by coordinates such as (0,0,1), (0,1,0), (1,0,0), (0,0,-1), (0,-1,0), (-1,0,0), (1,1,1), (1,1,-1), (1,-1,1), (1,-1,-1), (-1,1,1), (-1,1,-1), (-1,-1,1), (-1,-1,-1).
For example, Fig. 3 is a schematic diagram of identifying the hand in a user image in method embodiment two of the human-computer interaction of the present invention. As shown in Fig. 3, after acquiring the user image of the current moment, the set-top box judges whether the hand in the user image matches the marker according to the spatial positions of the anchor points; once the hand in the user image matches the marker, it is determined that the hand in the user image has been identified. Because the judgment principle for the left hand is the same as that for the right hand, only the marker of the left hand is illustrated in Fig. 3.
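Purely for illustration, the following sketch checks detected hand feature points against the reference anchor coordinates listed above; the tolerance value and the assumption that the detected points have already been normalised to the same unit cube are not specified by the patent.

```python
import numpy as np

# Reference anchor positions of the marker, as listed in the description.
MARKER_ANCHORS = np.array([
    (0, 0, 1), (0, 1, 0), (1, 0, 0), (0, 0, -1), (0, -1, 0), (-1, 0, 0),
    (1, 1, 1), (1, 1, -1), (1, -1, 1), (1, -1, -1),
    (-1, 1, 1), (-1, 1, -1), (-1, -1, 1), (-1, -1, -1),
], dtype=float)

def hand_matches_marker(detected_points, tol=0.35):
    """Return True if every marker anchor has a detected feature point nearby.
    detected_points: (N, 3) array of hand feature points, assumed normalised
    to the same unit cube as the reference anchors."""
    pts = np.asarray(detected_points, dtype=float)
    for anchor in MARKER_ANCHORS:
        if np.min(np.linalg.norm(pts - anchor, axis=1)) > tol:
            return False
    return True
```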
Step 203: return to step 201.
The set-top box returns to step 201 and continues to acquire the user image of the next moment.
Step 204: extract the feature information of the hand.
The set-top box extracts the feature information of the hand according to the anchor points of the marker. Specifically, as shown in Fig. 3, a corresponding set of three-dimensional coordinate matrices is obtained from the anchor points around each finger and the palm. If the gray-level distribution inside the foreground object is relatively uniform and the gray-level distribution of the background is also relatively uniform, the histogram of the image will show two distinct peaks, and the valley between the two peaks can then be selected as the threshold. Since the histogram contains no positional information about the target, the image content must also be used to determine the corresponding positions. By performing a convolution operation on the pixels at the corresponding positions, information such as the size, shape, and skin color of the hand is obtained.
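A minimal sketch, assuming OpenCV, of the valley-between-two-peaks thresholding described above; the smoothing window and the peak-search heuristic are assumptions rather than part of the patent.

```python
import cv2
import numpy as np

def valley_threshold(gray):
    """Pick the valley between the two dominant histogram peaks as threshold."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    hist = np.convolve(hist, np.ones(9) / 9.0, mode="same")    # smooth histogram
    p1 = int(np.argmax(hist))                                   # first peak
    masked = hist.copy()
    masked[max(0, p1 - 30):min(256, p1 + 30)] = 0               # suppress its neighbourhood
    p2 = int(np.argmax(masked))                                 # second peak
    lo, hi = sorted((p1, p2))
    return lo + int(np.argmin(hist[lo:hi + 1]))                 # valley between the peaks

frame = cv2.imread("user_image.png")              # hypothetical input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
t = valley_threshold(gray)
_, hand_mask = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)
```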
Step 205: convert the position of the hand in the real coordinate system into the position of a virtual hand in the virtual coordinate system, and output the virtual hand and the virtual coordinate system.
After extracting the feature information of the hand, the set-top box uses the feature information to convert the hand into a virtual hand and, at the same time, converts the position of the hand in the real coordinate system into a position in the virtual coordinate system. Here the real coordinate system is a three-dimensional coordinate system referenced to the camera, i.e. a three-dimensional coordinate system with the camera as the coordinate origin, and the virtual coordinate system is a three-dimensional coordinate system obtained by scaling down the real coordinate system by a preset ratio. The preset ratio can be adjusted adaptively according to the actual situation; for example, a preset ratio can be generated automatically once the length and width of the display unit are known.
The virtual hand and the virtual coordinate system are output to the smart TV through a High-Definition Multimedia Interface (HDMI) or a Composite Video Broadcast Signal (CVBS) interface, and the smart TV presents the virtual hand in the virtual coordinate system.
Step 206: predict the shape changes and motion trajectory of the hand in the real coordinate system according to the performance features of each frame of the hand acquired by the camera.
The set-top box predicts the shape changes and motion trajectory of the hand in the real coordinate system according to the performance features of each frame of the hand acquired by the camera. The performance features include geometric features, statistical features, transform-domain features, and color features. The geometric features include perimeter, area, ellipticity, and height; the statistical features include gray-level mean and variance, histogram, entropy, moments, and contrast relative to the background; the transform-domain features include Fourier transform-domain, Gabor transform-domain, and wavelet transform-domain features.
The prediction of the hand's shape changes and motion trajectory is added here in order to reduce the search region for feature matching and improve real-time performance; such prediction also helps enhance the robustness of tracking under occlusion. Geometric features reflect the geometric properties of the target: they depend only on the positions of the target pixels and are independent of their gray levels, and commonly used geometric features include the target's perimeter, area, ellipticity, and height. Statistical features include the target's gray-level mean and variance, histogram, entropy, moments, and contrast relative to the background. Transform-domain features include features in the Fourier, Gabor, and wavelet transform domains.
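As a non-authoritative sketch, assuming OpenCV and the hand mask from the previous sketch, the geometric and statistical features named above could be computed per frame roughly as follows; the ellipticity definition and the dictionary layout are assumptions.

```python
import cv2
import numpy as np

def hand_frame_features(gray, hand_mask):
    """Compute a few of the per-frame features named in the description.
    hand_mask must be a single-channel uint8 mask of the hand region."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    cnt = max(contours, key=cv2.contourArea)
    _, _, _, h = cv2.boundingRect(cnt)
    (_, _), (axis_a, axis_b), _ = cv2.fitEllipse(cnt)   # needs >= 5 contour points
    hand_pixels = gray[hand_mask > 0]
    hist = cv2.calcHist([gray], [0], hand_mask, [256], [0, 256]).ravel()
    p = hist / max(hist.sum(), 1.0)
    entropy = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
    return {
        # geometric features
        "perimeter":   cv2.arcLength(cnt, True),
        "area":        cv2.contourArea(cnt),
        "ellipticity": min(axis_a, axis_b) / max(axis_a, axis_b),  # assumed definition
        "height":      h,
        # statistical features
        "gray_mean":   float(hand_pixels.mean()),
        "gray_var":    float(hand_pixels.var()),
        "entropy":     entropy,
        "moments":     cv2.HuMoments(cv2.moments(cnt)).ravel(),
    }
```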
Step 207: obtain the shape changes and motion trajectory of the hand in the real coordinate system by combining background subtraction, inter-frame differencing, and the optical flow method.
After segmenting the hand target, the set-top box extracts the feature information of the hand and then matches this feature information in the next frame image to track the target. Specifically, the shape changes and motion trajectory of the hand in the real coordinate system are obtained by combining background subtraction, inter-frame differencing, and the optical flow method.
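One possible way, assuming OpenCV, to combine the three techniques named above; fusing the masks with a bitwise AND and the Farneback parameters are assumptions, not specified by the patent.

```python
import cv2
import numpy as np

backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def track_step(prev_gray, gray, frame_bgr):
    """Fuse background subtraction, inter-frame differencing and optical flow
    to localise the moving hand and estimate its motion for this frame."""
    fg_mask = backsub.apply(frame_bgr)                         # background subtraction
    diff = cv2.absdiff(prev_gray, gray)                        # inter-frame differencing
    _, diff_mask = cv2.threshold(diff, 15, 255, cv2.THRESH_BINARY)
    motion_mask = cv2.bitwise_and(fg_mask, diff_mask)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,  # optical flow
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    ys, xs = np.where(motion_mask > 0)
    if len(xs) == 0:
        return motion_mask, None
    centroid = (float(xs.mean()), float(ys.mean()))            # trajectory sample
    mean_flow = flow[ys, xs].mean(axis=0)                      # average motion vector
    return motion_mask, (centroid, mean_flow)
```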
Step 208: obtain the shape changes and motion trajectory of the virtual hand in the virtual coordinate system according to the shape changes and motion trajectory of the hand in the real coordinate system.
After tracking in real time and obtaining the shape changes and motion trajectory of the hand in the real coordinate system, the set-top box obtains the shape changes and motion trajectory of the virtual hand in the virtual coordinate system from them, and then outputs the shape changes and motion trajectory of the virtual hand in the virtual coordinate system to the smart TV through HDMI or CVBS, and the smart TV presents them in the virtual coordinate system.
Step 209: obtain the operation instruction corresponding to the shape change of the hand according to the preset correspondence between hand shape changes and operation instructions.
After obtaining the shape change of the hand, the set-top box can obtain the corresponding operation instruction in real time from the preset correspondence between hand shape changes and operation instructions. The preset correspondence may take the form of a list, a table, a database, or the like; it is stored in the device in advance and can be configured according to the actual situation, without limitation here.
Step 210: process the content object according to the operation instruction, and output the processing result.
The set-top box processes the content object according to the corresponding operation instruction to obtain a processing result, and then outputs the processing result to the smart TV through HDMI or CVBS for the smart TV to present.
Several scenes are given below:
Scene one
Fig. 4 is a schematic diagram of scene one of method embodiment two of the human-computer interaction of the present invention. As shown in Fig. 4, 405 is the video program currently being played by the set-top box, 401 and 402 are the corresponding program lists, 403 is the program PF bar, and 404 is the virtual hand projected onto the smart TV from the user's hand captured by the camera; 404 follows the movement and motion changes of the user's gesture. For example, sliding 404 up or down relative to the screen switches the channel, and sliding it left or right changes the volume, giving an experience like a real hand sliding on a tablet touchscreen.
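An illustrative sketch of how the virtual hand's trajectory might be mapped to channel and volume commands in this scene; the displacement threshold, the dominant-axis rule, and the assumption that the y-axis increases upward are not taken from the patent.

```python
def scene_one_command(trajectory, min_dist=0.15):
    """Classify a virtual-hand trajectory (list of (x, y) points in the virtual
    coordinate system) into a scene-one command: vertical swipes switch the
    channel, horizontal swipes change the volume."""
    dx = trajectory[-1][0] - trajectory[0][0]
    dy = trajectory[-1][1] - trajectory[0][1]
    if max(abs(dx), abs(dy)) < min_dist:
        return None                                   # too small to count as a swipe
    if abs(dy) >= abs(dx):                            # vertical swipe dominates
        return "channel_up" if dy > 0 else "channel_down"
    return "volume_up" if dx > 0 else "volume_down"   # horizontal swipe

print(scene_one_command([(0.1, 0.4), (0.1, 0.8)]))    # -> "channel_up"
```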
Scene two
As shown in Fig. 4, the set-top box system can also extend the gesture actions: when arranging the program cards, the user is allowed to freely assemble and rearrange 401 and 402, and, when deleting, can crumple a card into a ball like scrap paper and throw it into a pop-up wastepaper basket, adding fun and enjoyment to the operation.
Scene three
Fig. 5 is a schematic diagram of scene three of method embodiment two of the human-computer interaction of the present invention. As shown in Fig. 5, the set-top box system adds a simulated gravity-sensor function. In Fig. 5, when the user's two hands move as if holding a steering wheel, the system tracks the straight-line distance between the two hands and performs recognition of the corresponding gravity-sensing-mode marker. When the distance and the shape of the two hands match, the game starts the gravity-sensor module and the system starts the conversion mode of the simulated gravity sensor: region 501 is converted, according to the rotation angle of the two hands, into corresponding gravity-sensing data provided to the game software, supporting, for example, left/right steering control and drift in a racing game.
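A hedged sketch of the rotation-angle-to-steering conversion this scene describes; the hand-distance check, the angle convention, and the output range are assumptions rather than anything specified by the patent.

```python
import math

def simulated_gravity_steering(left_hand, right_hand,
                               min_dist=0.25, max_dist=0.60):
    """left_hand / right_hand: (x, y, z) positions of the two hands in the
    virtual coordinate system. Returns a steering value in [-1, 1] when the
    hands form a plausible steering-wheel pose, otherwise None."""
    dx = right_hand[0] - left_hand[0]
    dy = right_hand[1] - left_hand[1]
    dist = math.hypot(dx, dy)
    if not (min_dist <= dist <= max_dist):
        return None                              # hands too close or too far apart
    roll = math.atan2(dy, dx)                    # wheel rotation angle
    return max(-1.0, min(1.0, roll / (math.pi / 2)))

print(simulated_gravity_steering((0.2, 0.5, 0.8), (0.6, 0.4, 0.8)))
```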
Scene four
The set-top box system adds a motion-sensing mode. When a motion-sensing game is entered, the system switches to the motion-sensing mode and the user interface (UI) draws the corresponding motion-sensing operation object, for example a badminton racket whose handle the user's hand holds. The system locks onto the hand shape, then tracks the marker corresponding to the moving hand position and converts and transmits the corresponding motion-sensing coordinate data to the game scene, realizing the control and experience of motion-sensing games.
Scene five
The set-top box system adds a picture/video zoom mode. While browsing pictures or watching a video, the user can perform a zoom action on a photo with both hands; the system recognizes the corresponding zoom gesture between the thumb and the middle finger, converts it into the corresponding operation, and performs the corresponding zoom action in the picture-browsing application. While watching a video, a screenshot of the corresponding region can be taken by drawing with a finger, and the captured picture can then be zoomed in or out accordingly.
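For illustration only, the thumb-to-middle-finger distance ratio could drive the zoom factor roughly as follows; the clamping range and the frame-to-frame ratio rule are assumptions.

```python
import math

def zoom_factor(prev_thumb, prev_middle, thumb, middle,
                min_zoom=0.5, max_zoom=3.0):
    """Derive a zoom factor from how the thumb-to-middle-finger distance has
    changed between two frames (positions in the virtual coordinate system)."""
    d_prev = math.dist(prev_thumb, prev_middle)
    d_now = math.dist(thumb, middle)
    if d_prev < 1e-6:
        return 1.0
    return max(min_zoom, min(max_zoom, d_now / d_prev))

# Spreading the two fingers apart zooms in (factor > 1.0).
print(zoom_factor((0.0, 0.0), (0.10, 0.0), (0.0, 0.0), (0.15, 0.0)))
```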
In the human-computer interaction method provided by this embodiment of the present invention, a user image is acquired through the camera; when a hand is identified in the user image according to the marker, the feature information of the hand is extracted; the position of the hand in the real coordinate system is converted into the position of a virtual hand in the virtual coordinate system, and the virtual hand and the virtual coordinate system are output; the shape changes and motion trajectory of the hand in the real coordinate system are predicted according to the performance features of each frame of the hand acquired by the camera; the shape changes and motion trajectory of the hand in the real coordinate system are obtained by combining background subtraction, inter-frame differencing, and the optical flow method; the shape changes and motion trajectory of the virtual hand in the virtual coordinate system are obtained from those of the hand in the real coordinate system; the operation instruction corresponding to the shape change of the hand is obtained according to the preset correspondence between hand shape changes and operation instructions; and the content object is processed according to the operation instruction and the processing result is output. Compared with the traditional solution, the technical solution provided by this embodiment of the present invention integrates AR technology into the set-top box: the set-top box only needs a camera to receive control from the user and, combined with the local interface, achieves a touch-like manipulation effect, greatly enhancing the user experience. Moreover, without any additional equipment, action events can be richly customized, the interaction effect is enhanced, and the real-time display of the actual human hand greatly enhances the sense of presence of the virtual figure.
Fig. 6 is a schematic structural diagram of the apparatus embodiment of the human-computer interaction of the present invention. As shown in Fig. 6, the human-computer interaction apparatus 06 provided by this embodiment of the present invention comprises:
an extraction module 61, configured to extract feature information of a hand when the hand is identified in a user image according to a marker;
a conversion module 62, configured to convert the position of the hand in a real coordinate system into the position of a virtual hand in a virtual coordinate system; wherein the real coordinate system is a three-dimensional coordinate system referenced to a camera, the virtual coordinate system is a three-dimensional coordinate system obtained by scaling down the real coordinate system by a preset ratio, and the virtual hand is generated from the feature information of the hand;
an output module 63, configured to output the virtual hand and the virtual coordinate system;
the conversion module 62 is further configured to obtain shape changes and a motion trajectory of the virtual hand in the virtual coordinate system according to shape changes and a motion trajectory of the hand in the real coordinate system;
a correspondence module 64, configured to obtain an operation instruction corresponding to the shape change of the hand according to a preset correspondence between hand shape changes and operation instructions;
a processing module 65, configured to process a content object according to the operation instruction;
the output module 63 is further configured to output a processing result.
Further, the apparatus further comprises:
an image acquisition module 66, configured to acquire the user image through the camera;
a judgment module 67, configured to judge whether the hand in the user image is identified according to the marker; wherein the marker includes anchor points of the hand in the three-dimensional coordinate system, the anchor points being located on the five fingers of the hand and around the palm.
Further, the apparatus further comprises:
a trajectory acquisition module 68, configured to obtain the motion trajectory of the hand in the real coordinate system by combining background subtraction, inter-frame differencing, and the optical flow method.
Further, the apparatus further comprises:
a prediction module 69, configured to predict the motion trajectory of the hand in the real coordinate system according to the performance features of each frame of the hand acquired by the camera.
Further, the performance features include geometric features, statistical features, transform-domain features, and color features;
the geometric features include perimeter, area, ellipticity, and height;
the statistical features include gray-level mean and variance, histogram, entropy, moments, and contrast relative to the background;
the transform-domain features include Fourier transform-domain, Gabor transform-domain, and wavelet transform-domain features.
Further, the output module 63 is specifically configured to output the virtual hand and the virtual coordinate system to a display unit through a High-Definition Multimedia Interface (HDMI) or a Composite Video Broadcast Signal (CVBS) interface;
the output module 63 is further specifically configured to output the processing result to the display unit through HDMI or CVBS.
The apparatus of this embodiment can be used to execute the technical solutions of the method embodiments shown above; its implementation principles and technical effects are similar and are not repeated here.
Fig. 7 is a schematic structural diagram of the device embodiment of the human-computer interaction of the present invention. As shown in Fig. 7, the human-computer interaction device 07 provided by this embodiment of the present invention includes an interface 71, a bus 72, a memory 73, and a processor 74; the interface 71, the memory 73, and the processor 74 are connected through the bus 72; the memory 73 is configured to store an executable program, and the processor 74 is configured to run the executable program to implement the following steps:
when a hand is identified in a user image according to a marker, extracting feature information of the hand;
converting the position of the hand in a real coordinate system into the position of a virtual hand in a virtual coordinate system, and outputting the virtual hand and the virtual coordinate system; wherein the real coordinate system is a three-dimensional coordinate system referenced to a camera, the virtual coordinate system is a three-dimensional coordinate system obtained by scaling down the real coordinate system by a preset ratio, and the virtual hand is generated from the feature information of the hand;
obtaining shape changes and a motion trajectory of the virtual hand in the virtual coordinate system according to shape changes and a motion trajectory of the hand in the real coordinate system;
obtaining an operation instruction corresponding to the shape change of the hand according to a preset correspondence between hand shape changes and operation instructions;
processing a content object according to the operation instruction, and outputting a processing result.
Further, the processor 74 is further configured to run the executable program to implement the following steps:
acquiring the user image through the camera;
judging whether the hand in the user image is identified according to the marker; wherein the marker includes anchor points of the hand in the three-dimensional coordinate system, the anchor points being located on the five fingers of the hand and around the palm.
Further, the processor 74 is further configured to run the executable program to implement the following step:
obtaining the shape changes and motion trajectory of the hand in the real coordinate system by combining background subtraction, inter-frame differencing, and the optical flow method.
Further, the processor 74 is further configured to run the executable program to implement the following step:
predicting the shape changes and motion trajectory of the hand in the real coordinate system according to the performance features of each frame of the hand acquired by the camera.
Further, the performance features include geometric features, statistical features, transform-domain features, and color features;
the geometric features include perimeter, area, ellipticity, and height;
the statistical features include gray-level mean and variance, histogram, entropy, moments, and contrast relative to the background;
the transform-domain features include Fourier transform-domain, Gabor transform-domain, and wavelet transform-domain features.
Further, the processor 74 is configured to run the executable program to implement the following step:
outputting the virtual hand and the virtual coordinate system to a display unit through a High-Definition Multimedia Interface (HDMI) or a Composite Video Broadcast Signal (CVBS) interface;
the processor 74 is configured to run the executable program to implement the following step:
outputting the processing result to the display unit through HDMI or CVBS.
The device of this embodiment can be used to execute the technical solutions of the method embodiments shown above; its implementation principles and technical effects are similar and are not repeated here.
As shown in Fig. 7, the various components in the human-computer interaction device 07 are coupled together through the bus 72. It can be understood that the bus 72 is used to realize connection and communication between these components; in addition to a data bus, the bus 72 also includes a power bus, a control bus, and a status signal bus, but for clarity of explanation all of these buses are labelled as bus 72 in Fig. 7.
The interface 71 may include a display, a keyboard, a mouse, a trackball, a click wheel, keys, buttons, a touch-sensitive pad, a touch screen, or the like.
It can be understood that the memory 73 may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memories. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferromagnetic random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be a magnetic disk memory or a magnetic tape memory. The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 73 described in the embodiments of the present invention is intended to include, but is not limited to, these and any other suitable types of memory.
The memory 73 in the embodiments of the present invention is used to store various types of data to support the operation of the human-computer interaction device 07. Examples of such data include any computer program to be run on the human-computer interaction device 07, such as an operating system and application programs, where the operating system contains various system programs such as a framework layer, a core library layer, and a driver layer for realizing various basic services and processing hardware-based tasks, and the application programs may include various applications such as a media player and a browser for realizing various application services. A program implementing the method of the embodiments of the present invention may be contained in an application program.
The method disclosed in the above embodiments of the present invention may be applied to, or implemented by, the processor 74. The processor 74 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 74 or by instructions in the form of software. The processor 74 may be a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 74 may implement or execute the methods, steps, and logic block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, any conventional processor, or the like. The steps of the method disclosed in the embodiments of the present invention may be directly embodied as being executed and completed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium, the storage medium being located in the memory 73; the processor 74 reads the information in the memory 73 and completes the steps of the foregoing method in combination with its hardware.
In an exemplary embodiment, the human-computer interaction device 07 may be implemented by one or more Application-Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, Micro-Controller Units (MCUs), microprocessors, or other electronic components, for executing the foregoing method.
The embodiments of the present invention further provide a computer-readable storage medium. The computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM, or a device including one of the above memories or any combination thereof. The computer-readable storage medium stores a program which, when executed by a processor, implements the following steps:
when a hand is identified in a user image according to a marker, extracting feature information of the hand;
converting the position of the hand in a real coordinate system into the position of a virtual hand in a virtual coordinate system, and outputting the virtual hand and the virtual coordinate system; wherein the real coordinate system is a three-dimensional coordinate system referenced to a camera, the virtual coordinate system is a three-dimensional coordinate system obtained by scaling down the real coordinate system by a preset ratio, and the virtual hand is generated from the feature information of the hand;
converting the shape changes and motion trajectory of the hand in the real coordinate system into shape changes and a motion trajectory of the virtual hand in the virtual coordinate system, and outputting the shape changes and motion trajectory of the virtual hand in the virtual coordinate system;
obtaining an operation instruction corresponding to the shape change of the hand according to a preset correspondence between hand shape changes and operation instructions;
processing a content object according to the operation instruction to obtain a processing result, and outputting the processing result.
Further, the program can also be executed by the processor to implement the following steps:
acquiring the user image through the camera;
judging whether the hand in the user image is identified according to the marker; wherein the marker includes anchor points of the left hand in the three-dimensional coordinate system and anchor points of the right hand in the three-dimensional coordinate system, the anchor points being located on the five fingers of the left hand or the right hand and around the palm.
Further, the program can also be executed by the processor to implement the following step:
obtaining the shape changes and motion trajectory of the hand in the real coordinate system by combining background subtraction, inter-frame differencing, and the optical flow method.
Further, the program can also be executed by the processor to implement the following step:
predicting the shape changes and motion trajectory of the hand in the real coordinate system according to the performance features of each frame of the hand acquired by the camera.
Further, the performance features include geometric features, statistical features, transform-domain features, and color features;
the geometric features include perimeter, area, ellipticity, and height;
the statistical features include gray-level mean and variance, histogram, entropy, moments, and contrast relative to the background;
the transform-domain features include Fourier transform-domain, Gabor transform-domain, and wavelet transform-domain features.
Further, the program can be executed by the processor to implement the following steps:
outputting the virtual hand and the virtual coordinate system to a display unit through a High-Definition Multimedia Interface (HDMI) or a Composite Video Broadcast Signal (CVBS) interface;
outputting the shape changes and motion trajectory of the virtual hand in the virtual coordinate system includes:
outputting the shape changes and motion trajectory of the virtual hand in the virtual coordinate system to the display unit through HDMI or CVBS;
outputting the processing result includes:
outputting the processing result to the display unit through HDMI or CVBS.
The computer-readable storage medium of this embodiment can be used to execute the technical solutions of the method embodiments shown above; its implementation principles and technical effects are similar and are not repeated here.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory and optical memory) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus which realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the scope of protection of the present invention.

Claims (13)

1. A human-computer interaction method, characterized in that the method comprises:
when a hand is identified in a user image according to a marker, extracting feature information of the hand;
converting the position of the hand in a real coordinate system into the position of a virtual hand in a virtual coordinate system, and outputting the virtual hand and the virtual coordinate system; wherein the real coordinate system is a three-dimensional coordinate system referenced to a camera, the virtual coordinate system is a three-dimensional coordinate system obtained by scaling down the real coordinate system by a preset ratio, and the virtual hand is generated from the feature information of the hand;
obtaining shape changes and a motion trajectory of the virtual hand in the virtual coordinate system according to shape changes and a motion trajectory of the hand in the real coordinate system;
obtaining an operation instruction corresponding to the shape change of the hand according to a preset correspondence between hand shape changes and operation instructions;
processing a content object according to the operation instruction, and outputting a processing result.
2. described the method according to claim 1, wherein before the characteristic information for extracting the hand Method further include:
The user images are obtained by the camera;
Judge the hand in the user images whether is identified according to the marker;Wherein, the marker includes: hand In the anchor point that three-dimensional coordinate is fastened, the anchor point is located on five fingers of the hand and around palm.
3. the method according to claim 1, wherein it is described according to the hand in the actual coordinates Change in shape and motion profile obtain the virtual hand in the virtual coordinate system change in shape and motion profile before, institute State method further include:
By background subtraction, inter-frame difference and optical flow method combine in the way of obtain the hand in the actual coordinates Change in shape and motion profile.
4. The method according to claim 1, characterized in that before obtaining the shape change and the motion trajectory of the virtual hand in the virtual coordinate system according to the shape change and the motion trajectory of the hand in the actual coordinate system, the method further comprises:
predicting the shape change and the motion trajectory of the hand in the actual coordinate system according to performance features of each frame of the hand captured by the camera.
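Claim 4 does not specify the predictor, so the sketch below stands in with a simple constant-velocity extrapolation of the hand centroid over the last few frames; the class, its history length, and the centroid-only state are assumptions.

from collections import deque
import numpy as np

class TrajectoryPredictor:
    """Predict the next hand centroid by constant-velocity extrapolation over
    the most recent frames."""

    def __init__(self, history: int = 5):
        self._positions = deque(maxlen=history)

    def observe(self, centroid_xyz) -> None:
        # Store the latest hand centroid measured in the actual coordinate system.
        self._positions.append(np.asarray(centroid_xyz, dtype=float))

    def predict_next(self) -> np.ndarray:
        if not self._positions:
            raise ValueError("no observations yet")
        if len(self._positions) < 2:
            return self._positions[-1]
        velocity = self._positions[-1] - self._positions[-2]
        return self._positions[-1] + velocity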
5. The method according to claim 4, characterized in that the performance features comprise geometric features, statistical features, transform-domain features, and color features;
the geometric features comprise perimeter, area, ellipticity, and height;
the statistical features comprise gray-level mean and variance, histogram, entropy, moments, and contrast relative to the background; and
the transform-domain features comprise the Fourier transform domain, the Gabor function transform domain, and the wavelet transform domain.
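The sketch below computes a subset of the per-frame features listed in claim 5 from a grayscale frame and a binary hand mask using OpenCV and NumPy. Taking ellipticity as the minor/major axis ratio of a fitted ellipse is one common convention and an assumption here, as is restricting the statistics to pixels inside the mask.

import cv2
import numpy as np

def frame_features(gray: np.ndarray, hand_mask: np.ndarray) -> dict:
    """Compute some of the per-frame geometric and statistical features."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hand = max(contours, key=cv2.contourArea)

    # Geometric features: perimeter, area, ellipticity, height.
    perimeter = cv2.arcLength(hand, closed=True)
    area = cv2.contourArea(hand)
    (_, _), (axis_a, axis_b), _ = cv2.fitEllipse(hand)
    _, _, _, height = cv2.boundingRect(hand)

    # Statistical features: gray-level mean/variance, histogram, entropy.
    pixels = gray[hand_mask > 0].astype(np.float64)
    hist, _ = np.histogram(pixels, bins=256, range=(0, 256), density=True)
    entropy = -np.sum(hist[hist > 0] * np.log2(hist[hist > 0]))

    return {
        "perimeter": perimeter,
        "area": area,
        "ellipticity": min(axis_a, axis_b) / max(axis_a, axis_b),
        "height": height,
        "gray_mean": pixels.mean(),
        "gray_var": pixels.var(),
        "entropy": entropy,
    }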
6. The method according to claim 1, characterized in that outputting the virtual hand and the virtual coordinate system comprises:
outputting the virtual hand and the virtual coordinate system to a display unit through a High-Definition Multimedia Interface (HDMI) or a Composite Video Baseband Signal (CVBS) interface;
and outputting the processing result comprises:
outputting the processing result to the display unit through HDMI or CVBS.
7. A device for human-computer interaction, characterized in that the device comprises an interface, a bus, a memory, and a processor, wherein the interface, the memory, and the processor are connected through the bus, the memory is configured to store an executable program, and the processor is configured to run the executable program to implement the following steps:
when a hand in a user image is identified according to a marker, extracting feature information of the hand;
converting a position of the hand in an actual coordinate system into a position of a virtual hand in a virtual coordinate system, and outputting the virtual hand and the virtual coordinate system; wherein the actual coordinate system is a three-dimensional coordinate system referenced to a camera, the virtual coordinate system is a three-dimensional coordinate system obtained by scaling the actual coordinate system down by a preset ratio, and the virtual hand is generated from the feature information of the hand;
obtaining a shape change and a motion trajectory of the virtual hand in the virtual coordinate system according to a shape change and a motion trajectory of the hand in the actual coordinate system;
obtaining an operation instruction corresponding to the shape change of the hand according to a preset correspondence between hand shape changes and operation instructions; and
processing a content object according to the operation instruction, and outputting a processing result.
8. The device according to claim 7, characterized in that the processor is further configured to run the executable program to implement the following steps:
acquiring the user image through the camera;
judging whether the hand in the user image is identified according to the marker; wherein the marker comprises anchor points of the hand in the three-dimensional coordinate system, the anchor points being located on the five fingers of the hand and around the palm.
9. The device according to claim 7, characterized in that the processor is further configured to run the executable program to implement the following step:
obtaining the shape change and the motion trajectory of the hand in the actual coordinate system by combining background subtraction, inter-frame differencing, and optical flow.
10. The device according to claim 7, characterized in that the processor is further configured to run the executable program to implement the following step:
predicting the shape change and the motion trajectory of the hand in the actual coordinate system according to performance features of each frame of the hand captured by the camera.
11. The device according to claim 10, characterized in that the performance features comprise geometric features, statistical features, transform-domain features, and color features;
the geometric features comprise perimeter, area, ellipticity, and height;
the statistical features comprise gray-level mean and variance, histogram, entropy, moments, and contrast relative to the background; and
the transform-domain features comprise the Fourier transform domain, the Gabor function transform domain, and the wavelet transform domain.
12. The device according to claim 7, characterized in that the processor is configured to run the executable program to implement the following step:
outputting the virtual hand and the virtual coordinate system to a display unit through a High-Definition Multimedia Interface (HDMI) or a Composite Video Baseband Signal (CVBS) interface;
and the processor is configured to run the executable program to implement the following step:
outputting the processing result to the display unit through HDMI or CVBS.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a program, and the program can be executed by a processor to implement the following steps:
when a hand in a user image is identified according to a marker, extracting feature information of the hand;
converting a position of the hand in an actual coordinate system into a position of a virtual hand in a virtual coordinate system, and outputting the virtual hand and the virtual coordinate system; wherein the actual coordinate system is a three-dimensional coordinate system referenced to a camera, the virtual coordinate system is a three-dimensional coordinate system obtained by scaling the actual coordinate system down by a preset ratio, and the virtual hand is generated from the feature information of the hand;
obtaining a shape change and a motion trajectory of the virtual hand in the virtual coordinate system according to a shape change and a motion trajectory of the hand in the actual coordinate system;
obtaining an operation instruction corresponding to the shape change of the hand according to a preset correspondence between hand shape changes and operation instructions; and
processing a content object according to the operation instruction, and outputting a processing result.
CN201711025265.7A 2017-10-27 2017-10-27 Human-computer interaction method, device, and computer-readable storage medium Withdrawn CN109725703A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711025265.7A CN109725703A (en) 2017-10-27 2017-10-27 Human-computer interaction method, device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN109725703A true CN109725703A (en) 2019-05-07

Family

ID=66291568

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711025265.7A Withdrawn CN109725703A (en) 2017-10-27 2017-10-27 Human-computer interaction method, device, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN109725703A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102200834A (en) * 2011-05-26 2011-09-28 华南理工大学 television control-oriented finger-mouse interaction method
CN103092332A (en) * 2011-11-08 2013-05-08 苏州中茵泰格科技有限公司 Digital image interactive method and system of television
CN103999018A (en) * 2011-12-06 2014-08-20 汤姆逊许可公司 Method and system for responding to user's selection gesture of object displayed in three dimensions
US20140125584A1 (en) * 2012-11-07 2014-05-08 Samsung Electronics Co., Ltd. System and method for human computer interaction

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112068699A (en) * 2020-08-31 2020-12-11 北京市商汤科技开发有限公司 Interaction method, interaction device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
WO2022001593A1 (en) Video generation method and apparatus, storage medium and computer device
TWI462035B (en) Object detection metadata
WO2020108082A1 (en) Video processing method and device, electronic equipment and computer readable medium
CN106875431B (en) Image tracking method with movement prediction and augmented reality implementation method
TWI556639B (en) Techniques for adding interactive features to videos
WO2020019666A1 (en) Multiple face tracking method for facial special effect, apparatus and electronic device
WO2019237745A1 (en) Facial image processing method and apparatus, electronic device and computer readable storage medium
WO2014120554A2 (en) Systems and methods for initializing motion tracking of human hands using template matching within bounded regions
CN104583902A (en) Improved identification of a gesture
CN104508680B (en) Improved video signal is tracked
EP3238213B1 (en) Method and apparatus for generating an extrapolated image based on object detection
US10257436B1 (en) Method for using deep learning for facilitating real-time view switching and video editing on computing devices
WO2019205945A1 (en) Method and computer apparatus for determining insertion position of advertisement, and storage medium
CN111580652A (en) Control method and device for video playing, augmented reality equipment and storage medium
CN109035415B (en) Virtual model processing method, device, equipment and computer readable storage medium
TW202141340A (en) Image processing method, electronic device and computer readable storage medium
EP2939411B1 (en) Image capture
US20230057963A1 (en) Video playing method, apparatus and device, storage medium, and program product
CN115115959A (en) Image processing method and device
KR20100103776A (en) Image processor, animation reproduction apparatus, and processing method and program for the processor and apparatus
WO2022218042A1 (en) Video processing method and apparatus, and video player, electronic device and readable medium
CN110297545B (en) Gesture control method, gesture control device and system, and storage medium
WO2020037924A1 (en) Animation generation method and apparatus
WO2020001016A1 (en) Moving image generation method and apparatus, and electronic device and computer-readable storage medium
CN109725703A (en) Human-computer interaction method, device, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20190507)