CN108241434A - Man-machine interaction method, device, medium and mobile terminal based on depth of view information - Google Patents
Man-machine interaction method, device, medium and mobile terminal based on depth of view information
- Publication number
- CN108241434A CN108241434A CN201810005036.7A CN201810005036A CN108241434A CN 108241434 A CN108241434 A CN 108241434A CN 201810005036 A CN201810005036 A CN 201810005036A CN 108241434 A CN108241434 A CN 108241434A
- Authority
- CN
- China
- Prior art keywords
- depth
- image
- face
- control
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
An embodiment of the present application discloses a man-machine interaction method, device, medium and mobile terminal based on depth of view information. The method includes: when it is detected that a target application has started, controlling a 3D depth camera to obtain facial information; determining a user state according to the facial information; and determining a control instruction according to the user state and controlling the target application according to the control instruction. Because the user images carry depth information, more detailed information can be detected, which improves the accuracy of motion detection, avoids the problem of the application responding accidentally because of unintended touches, and improves the accuracy and convenience of human-computer interaction. The mobile terminal can "see" the user, which makes human-computer interaction more intelligent and enriches the application scenarios of the human-computer interaction function.
Description
Technical field
The embodiments of the present application relate to mobile terminal technology, and in particular to a man-machine interaction method, device, medium and mobile terminal based on depth of view information.
Background technology
With the development of mobile terminal technology, the uses of mobile terminals are no longer limited to making calls and sending messages: more and more users install applications such as video players, music players and electronic readers on their terminals for convenience.

In the related art, applications are typically controlled manually. While using an application, the user usually needs to repeatedly input some simple operations, which affects the convenience of human-computer interaction and is prone to accidental-touch problems.
Invention content
An embodiment of the present application provides a man-machine interaction method, device, medium and mobile terminal based on depth of view information, which can optimize the human-computer interaction scheme and improve the convenience and accuracy of application control.
In a first aspect, an embodiment of the present application provides a man-machine interaction method based on depth of view information, including:

when it is detected that a target application has started, controlling a 3D depth camera to obtain facial information, where the facial information includes a face image with depth of view information;

determining a user state according to the facial information;

determining a control instruction according to the user state, and controlling the target application according to the control instruction.
In a second aspect, an embodiment of the present application further provides a human-computer interaction device based on depth of view information, the device including:

an information obtaining module, configured to control a 3D depth camera to obtain facial information when it is detected that a target application has started, where the facial information includes a face image with depth of view information;

a state determining module, configured to determine a user state according to the facial information;

an application control module, configured to determine a control instruction according to the user state and control the target application according to the control instruction.
In a third aspect, an embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the man-machine interaction method based on depth of view information described in the first aspect above is implemented.
In a fourth aspect, an embodiment of the present application further provides a mobile terminal, including a 3D depth camera, a memory, a processor, and a computer program stored in the memory and runnable on the processor. The 3D depth camera includes an ordinary camera and an infrared camera, and is used to capture face images with depth of view information; when the processor executes the computer program, the man-machine interaction method based on depth of view information described in the first aspect above is implemented.
An embodiment of the present application provides a human-computer interaction scheme based on depth of view information: when it is detected that a target application has started, the 3D depth camera is controlled to obtain facial information; a user state is determined according to the facial information; a control instruction is determined according to the user state, and the target application is controlled according to the control instruction. With this technical solution, the user's face is tracked based on face images with depth of view information to obtain the motion state of the user's head, the corresponding control instruction is determined through the pre-set correspondence between control instructions and user states, and the target application is then controlled according to that instruction. Because the user images carry depth information, more detailed information can be detected, which improves the accuracy of motion detection, avoids the problem of the application responding accidentally because the user touches the screen unintentionally, and improves the accuracy and convenience of human-computer interaction. The mobile terminal can "see" the user, which makes human-computer interaction more intelligent and enriches the application scenarios of the human-computer interaction function.
Description of the drawings
Fig. 1 is a kind of flow chart of man-machine interaction method based on depth of view information provided by the embodiments of the present application;
Fig. 2 is the flow chart of another man-machine interaction method based on depth of view information provided by the embodiments of the present application;
Fig. 3 is a kind of scheme schematic diagram for calculating reference offset angle provided by the embodiments of the present application;
Fig. 4 is a kind of structure diagram of human-computer interaction device based on depth of view information provided by the embodiments of the present application;
Fig. 5 is a kind of structure diagram of mobile terminal provided by the embodiments of the present application;
Fig. 6 is a kind of structure diagram of smart mobile phone provided by the embodiments of the present application.
Specific embodiment
The application is described in further detail below with reference to the accompanying drawings and embodiments. It is to be understood that the specific embodiments described here are only used to explain the application, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the application rather than the entire structure.

It should be mentioned that, before the exemplary embodiments are discussed in greater detail, some of them are described as processing depicted as flowcharts or methods. Although a flowchart describes the steps as sequential processing, many of the steps can be implemented in parallel, concurrently or simultaneously. In addition, the order of the steps can be rearranged. The processing can be terminated when its operations are completed, but there may also be additional steps not included in the drawings. The processing can correspond to a method, a function, a procedure, a subroutine, a subprogram, and the like.
Fig. 1 is a flowchart of a man-machine interaction method based on depth of view information provided by an embodiment of the present application. The method can be performed by a human-computer interaction device based on depth of view information, where the device can be implemented by software and/or hardware and can generally be integrated in a terminal, such as a mobile terminal with a 3D depth camera. As shown in Fig. 1, the method includes:
Step 110: when it is detected that a target application has started, control the 3D depth camera to obtain facial information.

It should be noted that, when the human-computer interaction function is initialized, the user is prompted to input the applications to be controlled by facial information; these are recorded as target applications and stored in a white list. Target applications include video applications, voice applications, e-book applications and the like. It can be understood that a target application can also be a system-default application controllable by facial information; such a target application is configured in the mobile terminal in the form of a configuration file before the mobile terminal is manufactured.
It should be noted that the 3D depth camera can be used to capture images with depth of view information and can detect a greater variety of user actions, providing diverse control actions for the target application and enriching the types of control actions. Optionally, 3D depth cameras include depth cameras based on structured-light ranging and depth cameras based on TOF (Time Of Flight) ranging.
For example, a depth camera based on structured-light ranging includes an ordinary camera (for example, an RGB camera) and an infrared camera. The infrared camera projects a light structure of a certain pattern onto the current scene to be captured, forming on the surface of each person or object in the scene a three-dimensional light-stripe image modulated by the people or objects in the scene; the ordinary camera then detects this three-dimensional light-stripe image to obtain a two-dimensional light-stripe distortion image. The degree of distortion of the stripes depends on the relative position between the ordinary camera and the infrared camera and on the surface profile or height of each person or object in the current scene to be captured. Since the relative position between the ordinary camera and the infrared camera in the depth camera is fixed, the three-dimensional surface profile of each person or object in the scene can be reproduced from the image coordinates of the two-dimensional light-stripe distortion image, thereby obtaining the depth information. Structured-light ranging has high resolution and measurement accuracy and can improve the accuracy of the acquired depth information.
Optionally, in the embodiment of the present application the 3D depth camera can also be a depth camera based on TOF ranging, which uses a sensor to record the phase change of modulated infrared light emitted from a light-emitting unit to the object and reflected back from the object. Based on the speed of light, the depth distance of the entire scene can be obtained in real time within one wavelength range. The people or objects in the current scene to be captured are at different depth positions, so the modulated infrared light takes a different time from emission to reception; in this way the depth information of the scene can be obtained. When computing depth information, a depth camera based on TOF ranging is not affected by the gray level and surface features of the object, can calculate the depth information rapidly, and has very high real-time performance.
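The TOF principle described above reduces to a round-trip-time calculation at the speed of light. A minimal sketch, with an example flight time made up purely for illustration:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_depth(round_trip_time_s):
    """Distance = (speed of light x round-trip time) / 2,
    since the modulated light travels to the object and back."""
    return C * round_trip_time_s / 2.0

# A round trip of ~6.67 ns corresponds to roughly 1 m of depth.
print(round(tof_depth(6.671e-9), 3))  # -> 1.0
```

In practice a TOF sensor infers this time from the phase shift of the modulated signal rather than timing individual pulses, which is why the unambiguous range is limited to one modulation wavelength, as the text notes.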
It should be noted that the facial information includes a face image with depth of view information. In the embodiment of the present application, the state of the target application is monitored by the mobile terminal. If it is detected that the target application is started, the operation of opening the 3D depth camera is performed in parallel with the start-up operation of the target application. After the 3D depth camera is opened, it is controlled to photograph the user; the user's face is photographed by the 3D depth camera to obtain the face image. If it is detected that a complete face image of the user has not been obtained by the 3D depth camera, the user is prompted to adjust the facial posture. Optionally, a prompt frame can be displayed in the preview interface of the camera to prompt the user to align the face with the prompt frame.
Illustratively, the 3D depth camera can be controlled to photograph the user's face according to a set period, obtaining multiple frames of face images.
Step 120: determine the user state according to the facial information.

It should be noted that the user states corresponding to the preset control instructions are set in advance. These include, but are not limited to: the user swinging the head corresponds to control instructions such as page turning or song switching; the state in which the user turns the head to a set position and stays there for a set time corresponds to the video fast-forward control instruction; and the user's head deviation angle exceeding a set angle threshold corresponds to the video-switching control instruction. For example, the user swinging the head corresponds to the page-turning instruction of an e-book: swinging the head to the right corresponds to the control instruction "next page", and swinging the head to the left corresponds to the control instruction "previous page". For another example, if the head deflects to the right to a set position with a deviation angle smaller than the set angle threshold, and the dwell time at that position falls within a set time period, the video being played in the target video application is fast-forwarded by a first time length. If the deviation angle of the rightward deflection is smaller than the set angle threshold but the dwell time at that position exceeds a set time threshold, the video is controlled to keep fast-forwarding until it is detected that the user state changes, at which point the fast-forward operation on the video stops.
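The correspondences above amount to a white-list lookup from a classified user state to a control instruction. A minimal sketch of such a mapping; the state labels, thresholds and instruction names are illustrative assumptions, not values fixed by the patent:

```python
# Illustrative white list: user state -> control instruction.
WHITE_LIST = {
    "swing_right": "next_page",
    "swing_left": "previous_page",
    "dwell_right_below_threshold": "fast_forward",
    "deflect_beyond_threshold": "switch_video",
}

def classify_state(deviation_deg, dwell_s,
                   angle_threshold=40.0, dwell_threshold=2.0):
    """Map a (deviation angle, dwell time) pair to a state label."""
    if abs(deviation_deg) > angle_threshold:
        return "deflect_beyond_threshold"
    if deviation_deg > 0 and dwell_s >= dwell_threshold:
        return "dwell_right_below_threshold"
    return "swing_right" if deviation_deg > 0 else "swing_left"

print(WHITE_LIST[classify_state(25.0, 3.0)])  # -> fast_forward
print(WHITE_LIST[classify_state(50.0, 0.5)])  # -> switch_video
```

Storing the correspondence as a table rather than hard-coded branches matches the patent's description of control instructions and user states stored in association in a white list.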
In the embodiment of the present application, the deviation angle of the face is determined according to the depth of view information of the face image. Since the depth of view information reflects the spatial position relationship of the face pixels, the deviation angle of the face can be calculated from it. Specifically, the positions of the eyes in the face image can be identified, and the symmetry axis of the face determined from the eyes. When the face directly faces the 3D depth camera, the left face region and the right face region are at essentially the same distance from the camera, so if set sampling points are extracted from the left face region and the right face region respectively, their depth of view information is essentially identical. If the user's head deflects, the depth of view information of the left and right face regions changes accordingly, placing the two regions in different depth planes, so that the depth of view information of the set sampling points is no longer identical. The deviation angle of the face can then be calculated based on the triangle relationship of the depth of view information of the left and right face regions. Illustratively, a set number of sampling points is selected from the left face region and an identical number from the corresponding positions of the right face region, forming set sampling point pairs; according to the depth of view information of each set sampling point pair, an arctangent function is used to calculate the reference offset angle of each pair, and the average of the reference offset angles is taken as the deviation angle of the face. Optionally, the pixel points corresponding to the left eye corner and the right eye corner near the bridge of the nose can be selected to form a set sampling point pair, or sampling points can be selected correspondingly on the set straight line passing through those eye corners (the set straight line being perpendicular to the line connecting the two eyes), and so on.
It can be understood that there are many ways to determine the user state according to the facial information, and the present application does not specifically limit them. For example, face images of the user facing each predetermined angle can also be captured in advance and stored as image templates. When the user state needs to be determined according to the facial information, the captured face image can be matched against the image templates to determine the deviation angle of the face.
In the embodiment of the present application, the moment at which the user's head starts to rotate and the moment at which it stops rotating can be determined by comparing the face images of two adjacent shooting times. When it is detected that the user's head has stopped rotating, the deviation angle of the face is determined according to the depth of view information of the face image at that moment. In addition, when it is detected that the user's head has stopped rotating, a timer is triggered to start timing, and it stops timing when it is detected that the user's head moves again, so as to record the dwell time of the head at the position corresponding to the deviation angle.
Step 130: determine a control instruction according to the user state, and control the target application according to the control instruction.

It should be noted that a control instruction is an operation instruction for controlling the target application, including but not limited to fast forward, rewind, switch to the next file, switch to the previous file, and page turning. The user states corresponding to the preset control instructions are set in advance, and the control instructions and user states are stored in association in the white list.

In the embodiment of the present application, after determining the user state, the mobile terminal queries the pre-set white list according to the user state to determine the control instruction corresponding to that state, determines the command corresponding to the control instruction (a command that can be recognized and executed by the target application), and sends the command to the target application. On receiving the command, the target application performs the corresponding operation, thereby responding to the control instruction. For example, while the target video application is running, it is detected that the user's head is offset to the right by a set angle and stays at the corresponding position for 3 s; assuming that the set angle is smaller than the set angle threshold and the dwell time falls within the set time period, it is determined that the control instruction is to fast-forward the video by 5 minutes (a system default; the time is not limited to 5 minutes and can also be set by the user), and the command corresponding to the control instruction is sent to the target video application to fast-forward the currently playing video file by 5 minutes. For another example, while the target video application is running, it is detected that the user's head is offset to the right by a set angle; if the set angle exceeds the set angle threshold, it is determined that the control instruction is to switch the video (play the next episode), and the control instruction is sent to the target video application to control it to play the next episode of the current video.
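The dispatch just described — translate the control instruction into a command the target application executes — can be sketched as a handler registry. The instruction names, the 5-minute default and the player fields are illustrative assumptions drawn from the examples above:

```python
# Illustrative dispatcher; instruction names and handlers are assumptions.
def fast_forward(app, seconds=300):  # 5-minute default, as in the example
    app["position_s"] += seconds
    return "fast_forwarded"

def switch_video(app):
    app["episode"] += 1              # play the next episode
    return "switched"

HANDLERS = {"fast_forward": fast_forward, "switch_video": switch_video}

def dispatch(instruction, app):
    """Execute the command corresponding to a control instruction."""
    handler = HANDLERS.get(instruction)
    return handler(app) if handler else "ignored"

player = {"position_s": 120, "episode": 1}
print(dispatch("fast_forward", player), player["position_s"])  # -> fast_forwarded 420
print(dispatch("switch_video", player), player["episode"])     # -> switched 2
```

Unrecognized states fall through to "ignored", mirroring the requirement that only instructions stored in the white list trigger any response.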
In the technical solution of this embodiment, when it is detected that the target application has started, the 3D depth camera is controlled to obtain facial information; the user state is determined according to the facial information; a control instruction is determined according to the user state, and the target application is controlled according to the control instruction. With this technical solution, the user's face is tracked based on face images with depth of view information to obtain the motion state of the user's head, the corresponding control instruction is determined through the pre-set correspondence between control instructions and user states, and the target application is then controlled according to that instruction. Because the user images carry depth information, more detailed information can be detected, which improves the accuracy of motion detection, avoids the problem of the application responding accidentally because of unintended touches, and improves the accuracy and convenience of human-computer interaction. The mobile terminal can "see" the user, which makes human-computer interaction more intelligent and enriches the application scenarios of the human-computer interaction function.
It should be noted that, when it is detected that the user uses the human-computer interaction function for the first time, the correspondence between user states and control instructions is displayed in the form of a guide interface, to prompt the user about the control actions that can be input.
Fig. 2 is a flowchart of another man-machine interaction method based on depth of view information provided by an embodiment of the present application. As shown in Fig. 2, the method includes:
Step 210: control the ordinary camera included in the 3D depth camera to obtain two-dimensional images of the face according to a set period.

It should be noted that the 3D depth camera includes an ordinary camera and an infrared camera. When it is detected that the user opens an application, the application identifier of the application (which can be a package name, a process name, or the like) is obtained, the preset white list is queried according to the identifier, and it is judged whether the application is a target application. If the application is a target application, the ordinary camera is controlled to open and to capture two-dimensional images of the face according to the set period. Optionally, after the ordinary camera is opened, it is detected whether the preview screen contains a face; if so, two-dimensional images of the face are captured according to the set period; otherwise, the user is prompted to adjust the facial posture until a face is detected in the preview screen. By comparing the two-dimensional images of adjacent shooting times, it is determined whether the user rotates the head. When it is detected that the user rotates the head, a frame of the two-dimensional face image is captured as the first image, corresponding to the initial moment. The currently captured two-dimensional image is then compared in sequence with the image of the previous shooting time to determine the moment at which the head movement stops; when it is detected that the head movement has stopped, a frame of the two-dimensional face image is captured and recorded as the second image.
Step 220: determine the facial features corresponding to the two-dimensional image.

In the embodiment of the present application, the face region contained in the two-dimensional image is detected using contour detection technology to determine the facial contour, and the face area is then determined according to the facial contour.

It can be understood that the embodiment of the present application does not specifically limit the meaning of the facial features; a facial feature can also be the proportion of face pixels in the preview screen. For example, the face region contained in the two-dimensional image is determined; the maximum longitudinal resolution of the face region parallel to the long side of the mobile terminal's touch screen and the maximum lateral resolution of the face region parallel to the short side of the touch screen are obtained; the size corresponding to the face region is obtained from the maximum longitudinal resolution and the maximum lateral resolution; and the size of the face region is divided by the size of the touch screen to obtain the proportion of face pixels in the preview picture.
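The proportion computation just described reduces to a bounding-box ratio. A minimal sketch; the resolutions are made-up example values:

```python
def face_screen_ratio(face_h_px, face_w_px, screen_h_px, screen_w_px):
    """Proportion of the preview picture covered by the face's
    bounding box (max longitudinal x max lateral resolution)."""
    return (face_h_px * face_w_px) / (screen_h_px * screen_w_px)

# Example: a 600x400 px face bounding box on a 2000x1000 px screen.
print(face_screen_ratio(600, 400, 2000, 1000))  # -> 0.12
```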
Step 230: judge whether the two-dimensional image satisfies a set condition according to the facial features; if so, perform step 240; otherwise, return to step 210.

The face area difference between the above first image and second image is determined and compared with a set threshold, and whether the two-dimensional image satisfies the set condition is judged according to the comparison result. Illustratively, when the face area difference is smaller than the set threshold, it is determined that the two-dimensional image does not satisfy the set condition. This prevents small head movements of the user from being detected and causing accidental control, and improves the control accuracy of the mobile terminal; for example, it avoids triggering accidental control because the user sneezes while watching a video or reading an e-book. When the face area difference exceeds the set threshold, it is determined that the two-dimensional image satisfies the set condition.
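The gating step above is a single threshold check on the area difference between the two frames; the threshold and area values below are assumptions for the sketch:

```python
def satisfies_condition(area_first, area_second, threshold=500.0):
    """True only when the face-area change between the two frames is
    large enough to count as a deliberate head movement (filters out
    small motions such as a sneeze)."""
    return abs(area_first - area_second) > threshold

print(satisfies_condition(24000.0, 24100.0))  # -> False (small twitch)
print(satisfies_condition(24000.0, 26000.0))  # -> True  (real movement)
```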
Step 240: open the infrared camera included in the 3D depth camera, capture a face image through the infrared camera and the ordinary camera, and close the infrared camera.

When the two-dimensional image satisfies the set condition, the infrared camera included in the 3D depth camera is opened, the facial information at the moment the head movement stops is captured through the infrared camera to obtain a depth image, at least one more frame of the two-dimensional face image is captured through the ordinary camera, and a three-dimensional face image is formed from the depth image and the re-captured two-dimensional image.

It can be understood that, during user-state detection, the ordinary camera typically detects the facial movement and the end point of a single facial movement. A single facial movement can include the motion process from the above initial moment to the moment the head movement stops, and the end point of a single facial movement is the moment the head movement stops. The infrared camera is opened to capture the three-dimensional face image when this end point is detected, and is closed after the depth image has been obtained, which can reduce the power consumption of the mobile terminal.

Optionally, the three-dimensional face image can also be formed from the second image captured by the ordinary camera at the moment the head movement stops and the depth image captured by the infrared camera.
Step 250 determines User Status according to the three dimensional face image.
The deviation angle of face is determined according to the corresponding depth of view information of three dimensional face image, and records head and is transported on head
Dynamic stop position residence time, User Status include the deviation angle and head when head movement stop position stops
Between.
The three-dimensional face image is recognized to determine the position of the face in the three-dimensional view, and thus the face region and the axis of symmetry of the face region. The axis of symmetry divides the face region into a left face region and a right face region. A set number of feature points are extracted at set positions in the left face region, and, taking the axis of symmetry as reference, the mirror feature point of each feature point in the right face region is determined; each feature point together with its mirror feature point forms a set sampling-point pair. The depth-of-field information of each set sampling-point pair and the distance between its feature point and mirror feature point are obtained, and the reference offset angle of each pair is calculated using the arctangent function.
Taking one set sampling-point pair as an example, the calculation of the reference offset angle is illustrated. Fig. 3 is a schematic diagram of a scheme for calculating the reference offset angle provided by an embodiment of the present application. As shown in Fig. 3, L1 and L2 are the distances from feature point 320 and mirror feature point 330, respectively, to the 3D depth camera 310, i.e. the depth-of-field information corresponding to feature point 320 and mirror feature point 330, and W is the distance between feature point 320 and mirror feature point 330. Assume the user's head deflects to the left; the axis of symmetry AB then moves from the first position 340 to the corresponding second position 350, and feature point 320 and mirror feature point 330 are symmetric about the axis AB at the second position. Taking the deviation angle of the axis of symmetry AB as the reference offset angle α corresponding to feature point 320 and mirror feature point 330, α may be calculated by the following formula:

α = arctan((L1 − L2) / W)
It can be understood that the reference offset angle of each set sampling-point pair can be calculated with the above formula, and the deviation angle of the face then determined from these reference offset angles. For example, the average of the reference offset angles may be taken as the deviation angle of the face. Alternatively, the reference offset angles may be sorted in descending order and the maximum reference offset angle taken as the deviation angle of the face; the minimum reference offset angle, or the reference offset angle at the middle of the queue, may also be used.
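The arctangent computation and the aggregation strategies just described can be sketched as follows, assuming the geometry of Fig. 3 gives tan α = (L1 − L2)/W; the function names, the aggregation options, and the sample depth values are illustrative:

```python
import math
from statistics import mean, median

def reference_offset_angle(l1: float, l2: float, w: float) -> float:
    """Reference offset angle (degrees) of one sampling-point pair: the feature
    point sits at depth l1, its mirror point at depth l2, separated by w."""
    return math.degrees(math.atan((l1 - l2) / w))

def face_deviation_angle(pairs, strategy="mean"):
    """Aggregate the per-pair reference offset angles into one face deviation
    angle via the mean, maximum, minimum, or median, as described above."""
    angles = [reference_offset_angle(l1, l2, w) for (l1, l2, w) in pairs]
    if strategy == "mean":
        return mean(angles)
    if strategy == "max":
        return max(angles, key=abs)   # largest magnitude, sign preserved
    if strategy == "min":
        return min(angles, key=abs)
    return median(angles)

# Two sampling-point pairs; depths and separations in metres
pairs = [(0.52, 0.50, 0.06), (0.53, 0.50, 0.09)]
print(round(face_deviation_angle(pairs, "mean"), 1))  # 18.4
```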
Step 260: query a preset white list according to the user state, and determine the control instruction corresponding to the user state.
It should be noted that the user state includes the deviation angle of the face and the dwell time of the head at the position corresponding to the deviation angle.
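A white-list query of this kind can be sketched as a table lookup keyed on a quantized user state. The thresholds, key layout, and instruction names below are hypothetical; the patent only specifies that the white list maps user states (deviation angle plus dwell time) to control instructions:

```python
# Hypothetical white list; keys and values are illustrative only.
WHITE_LIST = {
    ("left", "short"): "previous_file",
    ("left", "long"): "rewind",
    ("right", "short"): "next_file",
    ("right", "long"): "fast_forward",
}

def lookup_control(deviation_angle_deg: float, dwell_seconds: float):
    """Quantize the user state (deviation angle + dwell time) and query the
    preset white list; return None when no instruction matches."""
    if abs(deviation_angle_deg) < 10.0:   # dead zone: no deliberate head tilt
        return None
    direction = "left" if deviation_angle_deg > 0 else "right"
    duration = "long" if dwell_seconds >= 1.5 else "short"
    return WHITE_LIST.get((direction, duration))

print(lookup_control(18.4, 2.0))  # rewind
```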
Step 270: send the instruction corresponding to the control instruction to the target application.
The technical solution of the present embodiment, by the way that the common camera that 3D depth cameras include is controlled to be obtained according to the setting period
Take the corresponding two dimensional image of face, the two dimensional image meet impose a condition when, open the 3D depth cameras include it is infrared
Camera, by the infrared photography, head to head portion moves the facial information of stop timing and is shot, and obtains depth image, real
The terminal of common camera detection facial movement and single facial movement is now first passed through, is opened when detecting the terminal infrared
Camera to shoot three-dimensional face image, can reduce the power consumption of mobile terminal, extend cruise duration.In addition, judge X-Y scheme
Seem that no satisfaction imposes a condition, can be effectively prevented from error detection causes to control the mistake of destination application, further improves
The control accuracy of mobile terminal.
Fig. 4 is a structural diagram of a human-computer interaction device based on depth-of-field information provided by an embodiment of the present application. The device may be implemented in software and/or hardware and integrated in a mobile terminal, such as a mobile terminal with a 3D depth camera, to perform the human-computer interaction method based on depth-of-field information provided by the embodiments of the present application. As shown in Fig. 4, the device includes:
an information obtaining module 410, configured to control the 3D depth camera to obtain facial information when it is detected that a target application starts, wherein the facial information includes a face image with depth-of-field information;
a state determining module 420, configured to determine a user state according to the facial information; and
an application control module 430, configured to determine a control instruction according to the user state and control the target application according to the control instruction.
An embodiment of the present application provides a human-computer interaction device based on depth-of-field information, which tracks the user's face on the basis of a face image carrying depth-of-field information, obtains the motion state of the user's head, determines the corresponding control instruction from a preset correspondence between control instructions and user states, and then controls the target application according to that control instruction. Because the user images carry depth information, more detailed information can be detected, which improves the accuracy of motion detection, avoids erroneous responses of the application caused by accidental touches by the user, and improves the accuracy and convenience of human-computer interaction. The mobile terminal can thereby "see" the user, making human-computer interaction more intelligent and enriching the application scenarios of the human-computer interaction function.
Optionally, the information obtaining module 410 includes:
a two-dimensional image obtaining submodule, configured to control the ordinary camera included in the 3D depth camera to obtain the two-dimensional image corresponding to the face at a set period; and
a face image shooting submodule, configured to, when the two-dimensional image meets a set condition, open the infrared camera included in the 3D depth camera, shoot the face image by the infrared camera and the ordinary camera, and close the infrared camera.
Optionally, the device further includes:
a feature determining module, configured to determine the facial feature corresponding to the two-dimensional image after the ordinary camera included in the 3D depth camera obtains the two-dimensional image corresponding to the face at the set period; and
a condition judging module, configured to judge, according to the facial feature, whether the two-dimensional image meets the set condition.
Optionally, the condition judging module is specifically configured to:
determine the face area difference between a first image and a second image, wherein the first image is the two-dimensional image shot at the head-movement start time and the second image is the two-dimensional image shot at the head-movement stop time; and
compare the face area difference with a set threshold, and judge, according to the comparison result, whether the two-dimensional image meets the set condition.
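The face-area comparison this module performs can be sketched as below. The patent fixes neither the threshold value nor the direction of the comparison; here we assume the condition is met when the relative area difference stays within the threshold:

```python
def meets_set_condition(area_first: float, area_second: float,
                        threshold: float = 0.15) -> bool:
    """Compare the face areas of the first image (shot at head-movement start)
    and the second image (shot at head-movement stop). We assume the 2D image
    meets the set condition when the relative difference is within the
    threshold, i.e. the face has not left the frame or changed scale much."""
    if area_first <= 0:
        return False
    return abs(area_first - area_second) / area_first <= threshold

print(meets_set_condition(10000.0, 10800.0))  # True: 8% difference
print(meets_set_condition(10000.0, 13000.0))  # False: 30% difference
```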
Further, the face image shooting submodule is specifically configured to:
shoot, by the infrared camera, the facial information at the head-movement stop time to obtain a depth image, the depth image and the second image constituting the face image.
Optionally, the state determining module 420 is specifically configured to:
determine the deviation angle of the face according to the depth-of-field information of the face image, and record the dwell time of the head at the position corresponding to the deviation angle.
Optionally, the application control module 430 is specifically configured to:
query a preset white list according to the user state and determine the control instruction corresponding to the user state, wherein the control instruction includes fast forward, rewind, switch to the next file, switch to the previous file, and page turning; and
send the instruction corresponding to the control instruction to the target application, wherein the instruction is used to instruct the target application to respond to the control instruction, and the target application includes video applications, voice applications, and e-book applications.
An embodiment of the present application also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a human-computer interaction method based on depth-of-field information, the method including:
when it is detected that a target application starts, controlling the 3D depth camera to obtain facial information, wherein the facial information includes a face image with depth-of-field information;
determining a user state according to the facial information; and
determining a control instruction according to the user state, and controlling the target application according to the control instruction.
Storage medium: any of various types of memory devices or storage devices. The term "storage medium" is intended to include installation media such as CD-ROMs, floppy disks, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, and Rambus RAM; non-volatile memory such as flash memory or magnetic media (e.g. hard disks or optical storage); and registers or other memory elements of similar types. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the first computer system in which the program is executed, or in a different, second computer system connected to the first computer system through a network such as the Internet; the second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that reside in different locations, for example in different computer systems connected through a network. The storage medium may store program instructions (e.g. embodied as computer programs) executable by one or more processors.
Of course, in the storage medium containing computer-executable instructions provided by the embodiments of the present application, the computer-executable instructions are not limited to the human-computer interaction operations based on depth-of-field information described above, and may also perform related operations in the human-computer interaction method based on depth-of-field information provided by any embodiment of the present application.
An embodiment of the present application provides a mobile terminal. The mobile terminal runs an operating system, and the human-computer interaction device based on depth-of-field information provided by the embodiments of the present application can be integrated in it. The mobile terminal may be a smartphone, a PAD (tablet computer), a handheld device, or the like. Fig. 5 is a structural diagram of a mobile terminal provided by an embodiment of the present application. As shown in Fig. 5, the mobile terminal includes a 3D depth camera 510, a memory 520, and a processor 530. The 3D depth camera 510 includes an ordinary camera and an infrared camera and is used to shoot the face image with depth-of-field information; the memory 520 is used to store the computer program, the face images, the association between user states and control instructions, and the like; the processor 530 reads and executes the computer program stored in the memory 520. When executing the computer program, the processor 530 implements the following steps: when it is detected that a target application starts, controlling the 3D depth camera to obtain facial information, wherein the facial information includes a face image with depth-of-field information; determining a user state according to the facial information; and determining a control instruction according to the user state and controlling the target application according to the control instruction. The 3D depth camera, memory, and processor enumerated in the above example are only some of the components of the mobile terminal, which may also include other components. Taking a smartphone as an example, a possible structure of the above mobile terminal is described. Fig. 6 is a structural diagram of a smartphone provided by an embodiment of the present application. As shown in Fig. 6, the smartphone may include: a memory 601, a central processing unit (CPU) 602 (also called a processor, hereinafter CPU), a peripheral interface 603, an RF (radio frequency) circuit 605, an audio circuit 606, a loudspeaker 611, a touch screen 612, a camera 613, a power management chip 608, an input/output (I/O) subsystem 609, other input/control devices 610, and an external port 604; these components communicate through one or more communication buses or signal lines 607.
It should be understood that the illustrated smartphone 600 is only one example of a mobile terminal; the smartphone 600 may have more or fewer components than shown in the figure, may combine two or more components, or may have a different component configuration. The various components shown in the figure may be implemented in hardware, software, or a combination of hardware and software, including one or more signal-processing and/or application-specific integrated circuits.
The smartphone integrating the human-computer interaction device based on depth-of-field information provided in this embodiment is described in detail below.
Memory 601: the memory 601 can be accessed by the CPU 602, the peripheral interface 603, and so on. The memory 601 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other volatile solid-state storage components. The memory 601 stores the computer program, and may also store facial information, the white list associating user states with control instructions, the white list corresponding to the target applications, and the like.
Peripheral interface 603: the peripheral interface 603 can connect the input and output peripherals of the device to the CPU 602 and the memory 601.
I/O subsystem 609: the I/O subsystem 609 can connect the input/output peripherals of the device, such as the touch screen 612 and the other input/control devices 610, to the peripheral interface 603. The I/O subsystem 609 may include a display controller 6091 and one or more input controllers 6092 for controlling the other input/control devices 610. The one or more input controllers 6092 receive electrical signals from, or send electrical signals to, the other input/control devices 610, which may include physical buttons (push buttons, rocker buttons, etc.), dials, slide switches, joysticks, and click wheels. It is worth noting that an input controller 6092 may be connected to any of the following: a keyboard, an infrared port, a USB interface, or a pointing device such as a mouse.
Touch screen 612: the touch screen 612 is the input and output interface between the user terminal and the user, and displays visual output to the user; the visual output may include graphics, text, icons, video, and the like.
Camera 613: the camera 613 may be a 3D depth camera, which obtains the three-dimensional image of the user's face, converts the three-dimensional face image into an electrical signal, and stores it in the memory 601 through the peripheral interface 603.
The display controller 6091 in the I/O subsystem 609 receives electrical signals from, or sends electrical signals to, the touch screen 612. The touch screen 612 detects contact on its surface, and the display controller 6091 converts the detected contact into interaction with the user interface objects displayed on the touch screen 612, thus realizing human-computer interaction; the user interface objects displayed on the touch screen 612 may be icons for running games, icons for connecting to corresponding networks, and the like. It is worth noting that the device may also include a light mouse, which is a touch-sensitive surface that does not display visual output, or an extension of the touch-sensitive surface formed by the touch screen.
RF circuit 605: the RF circuit 605 is mainly used to establish communication between the mobile phone and the wireless network (i.e. the network side) and to receive and send data between the phone and the wireless network, for example short messages and e-mails. Specifically, the RF circuit 605 receives and sends RF signals, also called electromagnetic signals: it converts electrical signals into electromagnetic signals or electromagnetic signals into electrical signals, and communicates with communication networks and other devices through the electromagnetic signals. The RF circuit 605 may include known circuits for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC (coder-decoder) chipset, a subscriber identity module (SIM), and so on.
Audio circuit 606: the audio circuit 606 is mainly used to receive audio data from the peripheral interface 603, convert the audio data into an electrical signal, and send the electrical signal to the loudspeaker 611.
Loudspeaker 611: the loudspeaker 611 restores the voice signal received by the mobile phone from the wireless network through the RF circuit 605 to sound and plays the sound to the user.
Power management chip 608: the power management chip 608 supplies power to, and performs power management for, the hardware connected to the CPU 602, the I/O subsystem, and the peripheral interface.
The mobile terminal provided by the embodiments of the present application tracks the user's face on the basis of a face image carrying depth-of-field information, obtains the motion state of the user's head, determines the corresponding control instruction from the preset correspondence between control instructions and user states, and then controls the target application according to that control instruction. Because the user images carry depth information, more detailed information can be detected, which improves the accuracy of motion detection, avoids erroneous responses of the application caused by accidental touches by the user, and improves the accuracy and convenience of human-computer interaction. The mobile terminal can thereby "see" the user, making human-computer interaction more intelligent and enriching the application scenarios of the human-computer interaction function.
The human-computer interaction device based on depth-of-field information, the storage medium, and the mobile terminal provided in the above embodiments can perform the human-computer interaction method based on depth-of-field information provided by any embodiment of the present application, and have the corresponding functional modules and advantageous effects for performing the method. For technical details not described in detail in the above embodiments, reference may be made to the human-computer interaction method based on depth-of-field information provided by any embodiment of the present application.
Note that the above are only preferred embodiments of the present application and the technical principles applied. Those skilled in the art will appreciate that the present application is not limited to the specific embodiments described here; various obvious changes, readjustments, and substitutions can be made by those skilled in the art without departing from the protection scope of the present application. Therefore, although the present application has been described in further detail through the above embodiments, it is not limited to the above embodiments; other equivalent embodiments may be included without departing from the concept of the present application, and the scope of the present application is determined by the scope of the appended claims.
Claims (10)
1. A human-computer interaction method based on depth-of-field information, characterized by including:
when it is detected that a target application starts, controlling a 3D depth camera to obtain facial information, wherein the facial information includes a face image with depth-of-field information;
determining a user state according to the facial information; and
determining a control instruction according to the user state, and controlling the target application according to the control instruction.
2. The method according to claim 1, characterized in that controlling the 3D depth camera to obtain facial information includes:
controlling an ordinary camera included in the 3D depth camera to obtain a two-dimensional image corresponding to the face at a set period; and
when the two-dimensional image meets a set condition, opening an infrared camera included in the 3D depth camera, shooting the face image by the infrared camera and the ordinary camera, and closing the infrared camera.
3. The method according to claim 2, characterized in that, after controlling the ordinary camera included in the 3D depth camera to obtain the two-dimensional image corresponding to the face at the set period, the method further includes:
determining a facial feature corresponding to the two-dimensional image; and
judging, according to the facial feature, whether the two-dimensional image meets the set condition.
4. The method according to claim 3, characterized in that judging, according to the facial feature, whether the two-dimensional image meets the set condition includes:
determining a face area difference between a first image and a second image, wherein the first image is the two-dimensional image shot at a head-movement start time, and the second image is the two-dimensional image shot at a head-movement stop time; and
comparing the face area difference with a set threshold, and judging, according to the comparison result, whether the two-dimensional image meets the set condition.
5. The method according to claim 4, characterized in that shooting the face image by the infrared camera and the ordinary camera includes:
shooting, by the infrared camera, the facial information at the head-movement stop time to obtain a depth image, the depth image and the second image constituting the face image.
6. The method according to claim 1, characterized in that determining the user state according to the facial information includes:
determining a deviation angle of the face according to the depth-of-field information of the face image, and recording a dwell time of the head at a position corresponding to the deviation angle.
7. The method according to any one of claims 1 to 6, characterized in that determining the control instruction according to the user state and controlling the target application according to the control instruction includes:
querying a preset white list according to the user state, and determining the control instruction corresponding to the user state, wherein the control instruction includes fast forward, rewind, switch to the next file, switch to the previous file, and page turning; and
sending an instruction corresponding to the control instruction to the target application, wherein the instruction is used to instruct the target application to respond to the control instruction, and the target application includes video applications, voice applications, and e-books.
8. A human-computer interaction device based on depth-of-field information, characterized by including:
an information obtaining module, configured to control a 3D depth camera to obtain facial information when it is detected that a target application starts, wherein the facial information includes a face image with depth-of-field information;
a state determining module, configured to determine a user state according to the facial information; and
an application control module, configured to determine a control instruction according to the user state and control the target application according to the control instruction.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the human-computer interaction method based on depth-of-field information according to any one of claims 1 to 7.
10. A mobile terminal, including a 3D depth camera, a memory, a processor, and a computer program stored in the memory and executable on the processor, the 3D depth camera including an ordinary camera and an infrared camera for shooting a face image with depth-of-field information; characterized in that the processor, when executing the computer program, implements the human-computer interaction method based on depth-of-field information according to any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810005036.7A CN108241434B (en) | 2018-01-03 | 2018-01-03 | Man-machine interaction method, device and medium based on depth of field information and mobile terminal |
PCT/CN2018/122308 WO2019134527A1 (en) | 2018-01-03 | 2018-12-20 | Method and device for man-machine interaction, medium, and mobile terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810005036.7A CN108241434B (en) | 2018-01-03 | 2018-01-03 | Man-machine interaction method, device and medium based on depth of field information and mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108241434A true CN108241434A (en) | 2018-07-03 |
CN108241434B CN108241434B (en) | 2020-01-14 |
Family
ID=62699338
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810005036.7A Expired - Fee Related CN108241434B (en) | 2018-01-03 | 2018-01-03 | Man-machine interaction method, device and medium based on depth of field information and mobile terminal |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108241434B (en) |
WO (1) | WO2019134527A1 (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109240570A (en) * | 2018-08-29 | 2019-01-18 | 维沃移动通信有限公司 | A kind of page turning method, device and terminal |
WO2019134527A1 (en) * | 2018-01-03 | 2019-07-11 | Oppo广东移动通信有限公司 | Method and device for man-machine interaction, medium, and mobile terminal |
CN110502110A (en) * | 2019-08-07 | 2019-11-26 | 北京达佳互联信息技术有限公司 | A kind of generation method and device of interactive application program feedback information |
CN110662129A (en) * | 2019-09-26 | 2020-01-07 | 联想(北京)有限公司 | Control method and electronic equipment |
CN110956603A (en) * | 2018-09-25 | 2020-04-03 | Oppo广东移动通信有限公司 | Method and device for detecting edge flying spot of depth image and electronic equipment |
CN111126163A (en) * | 2019-11-28 | 2020-05-08 | 星络智能科技有限公司 | Intelligent panel, interaction method based on face angle detection and storage medium |
CN111327888A (en) * | 2020-03-04 | 2020-06-23 | 广州腾讯科技有限公司 | Camera control method and device, computer equipment and storage medium |
CN111367598A (en) * | 2018-12-26 | 2020-07-03 | 北京奇虎科技有限公司 | Action instruction processing method and device, electronic equipment and computer-readable storage medium |
CN111459264A (en) * | 2018-09-18 | 2020-07-28 | 阿里巴巴集团控股有限公司 | 3D object interaction system and method and non-transitory computer readable medium |
CN111583355A (en) * | 2020-05-09 | 2020-08-25 | 维沃移动通信有限公司 | Face image generation method and device, electronic equipment and readable storage medium |
CN112529770A (en) * | 2020-12-07 | 2021-03-19 | 维沃移动通信有限公司 | Image processing method, image processing device, electronic equipment and readable storage medium |
CN113091227A (en) * | 2020-01-08 | 2021-07-09 | 佛山市云米电器科技有限公司 | Air conditioner control method, cloud server, air conditioner control system and storage medium |
CN115086095A (en) * | 2021-03-10 | 2022-09-20 | Oppo广东移动通信有限公司 | Equipment control method and related device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103268153A (en) * | 2013-05-31 | 2013-08-28 | 南京大学 | Human-computer interactive system and man-machine interactive method based on computer vision in demonstration environment |
EP2595402A3 (en) * | 2011-11-21 | 2014-06-25 | Microsoft Corporation | System for controlling light enabled devices |
CN106648042A (en) * | 2015-11-04 | 2017-05-10 | 重庆邮电大学 | Identification control method and apparatus |
CN107479801A (en) * | 2017-07-31 | 2017-12-15 | 广东欧珀移动通信有限公司 | Displaying method of terminal, device and terminal based on user's expression |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101227878B1 (en) * | 2011-05-30 | 2013-01-31 | 김호진 | Display device and display method based on user motion |
TWI524258B (en) * | 2011-08-19 | 2016-03-01 | 鴻海精密工業股份有限公司 | Electronic book display adjustment system and method |
CN103218124B (en) * | 2013-04-12 | 2015-12-09 | 通号通信信息集团有限公司 | Based on menu control method and the system of depth camera |
CN107506752A (en) * | 2017-09-18 | 2017-12-22 | 艾普柯微电子(上海)有限公司 | Face identification device and method |
CN108241434B (en) * | 2018-01-03 | 2020-01-14 | Oppo广东移动通信有限公司 | Man-machine interaction method, device and medium based on depth of field information and mobile terminal |
-
2018
- 2018-01-03 CN CN201810005036.7A patent/CN108241434B/en not_active Expired - Fee Related
- 2018-12-20 WO PCT/CN2018/122308 patent/WO2019134527A1/en active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2595402A3 (en) * | 2011-11-21 | 2014-06-25 | Microsoft Corporation | System for controlling light enabled devices |
CN103268153A (en) * | 2013-05-31 | 2013-08-28 | 南京大学 | Human-computer interactive system and man-machine interactive method based on computer vision in demonstration environment |
CN106648042A (en) * | 2015-11-04 | 2017-05-10 | 重庆邮电大学 | Identification control method and apparatus |
CN107479801A (en) * | 2017-07-31 | 2017-12-15 | 广东欧珀移动通信有限公司 | Displaying method of terminal, device and terminal based on user's expression |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019134527A1 (en) * | 2018-01-03 | 2019-07-11 | Oppo广东移动通信有限公司 | Method and device for man-machine interaction, medium, and mobile terminal |
CN109240570A (en) * | 2018-08-29 | 2019-01-18 | 维沃移动通信有限公司 | A kind of page turning method, device and terminal |
CN111459264B (en) * | 2018-09-18 | 2023-04-11 | 阿里巴巴集团控股有限公司 | 3D object interaction system and method and non-transitory computer readable medium |
CN111459264A (en) * | 2018-09-18 | 2020-07-28 | 阿里巴巴集团控股有限公司 | 3D object interaction system and method and non-transitory computer readable medium |
CN110956603A (en) * | 2018-09-25 | 2020-04-03 | Oppo广东移动通信有限公司 | Method and device for detecting edge flying spot of depth image and electronic equipment |
CN111367598B (en) * | 2018-12-26 | 2023-11-10 | 三六零科技集团有限公司 | Method and device for processing action instruction, electronic equipment and computer readable storage medium |
CN111367598A (en) * | 2018-12-26 | 2020-07-03 | 北京奇虎科技有限公司 | Action instruction processing method and device, electronic equipment and computer-readable storage medium |
CN110502110A (en) * | 2019-08-07 | 2019-11-26 | 北京达佳互联信息技术有限公司 | A kind of generation method and device of interactive application program feedback information |
CN110502110B (en) * | 2019-08-07 | 2023-08-11 | 北京达佳互联信息技术有限公司 | Method and device for generating feedback information of interactive application program |
CN110662129A (en) * | 2019-09-26 | 2020-01-07 | 联想(北京)有限公司 | Control method and electronic equipment |
CN111126163A (en) * | 2019-11-28 | 2020-05-08 | 星络智能科技有限公司 | Intelligent panel, interaction method based on face angle detection and storage medium |
CN113091227A (en) * | 2020-01-08 | 2021-07-09 | 佛山市云米电器科技有限公司 | Air conditioner control method, cloud server, air conditioner control system and storage medium |
CN113091227B (en) * | 2020-01-08 | 2022-11-01 | 佛山市云米电器科技有限公司 | Air conditioner control method, cloud server, air conditioner control system and storage medium |
CN111327888A (en) * | 2020-03-04 | 2020-06-23 | 广州腾讯科技有限公司 | Camera control method and device, computer equipment and storage medium |
CN111583355A (en) * | 2020-05-09 | 2020-08-25 | 维沃移动通信有限公司 | Face image generation method and device, electronic equipment and readable storage medium |
CN111583355B (en) * | 2020-05-09 | 2024-01-23 | 维沃移动通信有限公司 | Face image generation method and device, electronic equipment and readable storage medium |
CN112529770A (en) * | 2020-12-07 | 2021-03-19 | 维沃移动通信有限公司 | Image processing method, image processing device, electronic equipment and readable storage medium |
CN112529770B (en) * | 2020-12-07 | 2024-01-26 | 维沃移动通信有限公司 | Image processing method, device, electronic equipment and readable storage medium |
CN115086095A (en) * | 2021-03-10 | 2022-09-20 | Oppo广东移动通信有限公司 | Equipment control method and related device |
Also Published As
Publication number | Publication date |
---|---|
CN108241434B (en) | 2020-01-14 |
WO2019134527A1 (en) | 2019-07-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108241434A (en) | Man-machine interaction method, device, medium and mobile terminal based on depth of view information | |
CN109348135A (en) | Photographing method, device, storage medium and terminal device | |
CN108566516B (en) | Image processing method, device, storage medium and mobile terminal | |
CN107635095A (en) | Photo shooting method, apparatus, storage medium and shooting device | |
CN107820020A (en) | Shooting parameter adjustment method, device, storage medium and mobile terminal | |
CN109194879A (en) | Photographing method, device, storage medium and mobile terminal | |
CN104902189A (en) | Picture processing method and picture processing device | |
JP2016531362A (en) | Skin color adjustment method, skin color adjustment device, program, and recording medium | |
CN108419019A (en) | Photographing reminder method, device, storage medium and mobile terminal | |
CN109639896A (en) | Occluding object detection method, device, storage medium and mobile terminal | |
WO2022110614A1 (en) | Gesture recognition method and apparatus, electronic device, and storage medium | |
CN108646920A (en) | Recognition interaction method, device, storage medium and terminal device | |
CN108650457A (en) | Automatic photographing method, device, storage medium and mobile terminal | |
CN107330868A (en) | Image processing method and device | |
CN105827928A (en) | Focusing area selection method and focusing area selection device | |
CN108881544A (en) | Photographing method and mobile terminal | |
CN107958223A (en) | Face identification method and device, mobile equipment, computer-readable recording medium | |
CN110505549A (en) | Earphone control method and device | |
CN108494996A (en) | Image processing method, device, storage medium and mobile terminal | |
CN109726614A (en) | 3D stereoscopic imaging method and device, readable storage medium, electronic device | |
CN105335714A (en) | Photograph processing method, device and apparatus | |
CN110059547A (en) | Object detection method and device | |
WO2022099988A1 (en) | Object tracking method and apparatus, electronic device, and storage medium | |
CN106127166A (en) | Augmented reality (AR) image processing method, device and intelligent terminal | |
CN108921815A (en) | Photographing interaction method, device, storage medium and terminal device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
| Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong |
| Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp., Ltd. |
| Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong |
| Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp., Ltd. |
|
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20200114 |
|