CN107807806A - Display parameter adjustment method and device, and electronic device - Google Patents


Info

Publication number
CN107807806A
CN107807806A (application CN201711038000.0A)
Authority
CN
China
Prior art keywords
face
active user
models
eye
structure light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711038000.0A
Other languages
Chinese (zh)
Inventor
蒋国强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711038000.0A
Publication of CN107807806A
Legal status: Pending

Classifications

    • G06F 3/1407 — Digital output to display device; general aspects irrespective of display type, e.g. determination of decimal point position
    • G06F 3/013 — Eye tracking input arrangements
    • G06F 3/0481 — Interaction techniques based on GUIs, based on specific properties of the displayed interaction object or a metaphor-based environment
    • G06F 3/1423 — Digital output to display device; controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06T 7/521 — Depth or shape recovery from laser ranging or from the projection of structured light
    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters (camera calibration)
    • G06V 40/165 — Human face detection, localisation or normalisation using facial parts and geometric relationships
    • G06V 40/166 — Human face detection, localisation or normalisation using acquisition arrangements
    • G06V 40/19 — Sensors for eye characteristics, e.g. of the iris
    • G06T 2207/10004 — Still image; photographic image
    • G06T 2207/30201 — Face

Abstract

The invention discloses a display parameter adjustment method and device, and an electronic device. The method includes: projecting light onto the current user of a terminal device and obtaining a face 3D model of the current user; analyzing the face 3D model to obtain the position information of the eyes in the face 3D model; calculating the viewing angle at which the current user watches the terminal device according to the position information of the eyes and the position information of the terminal device; and adjusting the display parameters of the terminal device according to the viewing angle. Thus, when the user is at different positions relative to the terminal device, the display parameters are adjusted according to the user's position, so that the user can view a display screen with appropriate brightness and contrast at his or her current position, which improves the display effect of the terminal device's display screen and the user's experience with the terminal device.

Description

Display parameter adjustment method and device, and electronic device
Technical field
The present invention relates to the technical field of image processing, and more particularly to a display parameter adjustment method and device, and an electronic device.
Background technology
At present, the light emitted by the display screen of a terminal device differs in each direction. Among multiple positions at equal distance from the display screen, the smaller the corresponding viewing angle, the higher the brightness of the display screen appears; the larger the viewing angle, the lower the brightness. Here, the viewing angle is the angle between the viewing line of sight and the vertical direction (normal) of the display screen. As a result, when the user of the terminal device is at a poor viewing position, i.e. when the user's viewing angle is large, the perceived brightness of the display screen is low, which degrades the display effect of the screen and affects the user's experience with the terminal device.
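As an illustrative sketch of this definition (not part of the patent itself), the viewing angle can be computed as the angle between the line of sight from the eye to the screen center and the screen's normal direction. The coordinates, units, and function name below are hypothetical.

```python
import math

def viewing_angle_deg(eye_pos, screen_center, screen_normal):
    """Angle (in degrees) between the line of sight from the eye to the
    screen center and the screen's normal direction."""
    sight = [c - e for c, e in zip(screen_center, eye_pos)]  # eye -> screen
    dot = sum(s * n for s, n in zip(sight, screen_normal))
    mag_s = math.sqrt(sum(s * s for s in sight))
    mag_n = math.sqrt(sum(n * n for n in screen_normal))
    # abs() so the result does not depend on which way the normal points
    return math.degrees(math.acos(abs(dot) / (mag_s * mag_n)))

# Eye straight in front of the screen: angle 0; off to the side: larger angle.
print(viewing_angle_deg((0, 0, 50), (0, 0, 0), (0, 0, 1)))   # → 0.0
print(viewing_angle_deg((50, 0, 50), (0, 0, 0), (0, 0, 1)))  # ≈ 45.0
```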
Summary of the invention
Embodiments of the present invention provide a display parameter adjustment method and device, and an electronic device.
The display parameter adjustment method of embodiments of the present invention includes:
projecting light onto the current user of a terminal device, and obtaining a face 3D model of the current user;
analyzing the face 3D model, and obtaining the position information of the eyes in the face 3D model;
calculating the viewing angle at which the current user watches the terminal device according to the position information of the eyes and the position information of the terminal device; and
adjusting the display parameters of the terminal device according to the viewing angle.
Further, projecting light onto the current user of the terminal device and obtaining the face 3D model of the current user includes:
projecting structured light onto the current user;
capturing a structured-light image modulated by the current user;
demodulating the phase information corresponding to each pixel of the structured-light image to obtain a face depth image of the current user; and
generating the face 3D model of the current user according to the face depth image of the current user and the structured-light image.
Further, demodulating the phase information corresponding to each pixel of the structured-light image to obtain the face depth image of the current user includes:
demodulating the phase information corresponding to each pixel of the structured-light image;
converting the phase information into depth information; and
generating the face depth image according to the depth information.
Further, analyzing the face 3D model and obtaining the position information of the eyes in the face 3D model includes:
analyzing the face 3D model, and extracting feature point information of the face 3D model; and
comparing the feature point information of the face 3D model with pre-stored eye feature point information, and determining the position information of the eyes in the face 3D model.
Further, the display parameters include brightness and contrast;
adjusting the display parameters of the terminal device according to the viewing angle includes:
judging whether the viewing angle is greater than a preset viewing-angle threshold; and
when the viewing angle is greater than the preset viewing-angle threshold, increasing the brightness and contrast of the terminal device.
Further, the method also includes:
obtaining the number of current users using the terminal device; and
when the number of current users is greater than a preset number threshold, increasing the brightness and contrast of the terminal device.
With the display parameter adjustment method of embodiments of the present invention, light is projected onto the current user of a terminal device to obtain a face 3D model of the current user; the face 3D model is analyzed to obtain the position information of the eyes in the face 3D model; the viewing angle at which the current user watches the terminal device is calculated according to the position information of the eyes and the position information of the terminal device; and the display parameters of the terminal device are adjusted according to the viewing angle. Thus, when the user is at different positions relative to the terminal device, the display parameters are adjusted according to the user's position, so that the user can view a display screen with appropriate brightness and contrast at his or her current position, which improves the display effect of the terminal device's display screen and the user's experience with the terminal device.
The display parameter adjustment device of embodiments of the present invention includes:
a depth image acquisition component, configured to project light onto the current user of a terminal device and obtain a face 3D model of the current user; and
a processor, configured to analyze the face 3D model and obtain the position information of the eyes in the face 3D model;
calculate the viewing angle at which the current user watches the terminal device according to the position information of the eyes and the position information of the terminal device; and
adjust the display parameters of the terminal device according to the viewing angle.
Further, the depth image acquisition component includes a structured-light projector and a structured-light camera, the structured-light projector being configured to project structured light onto the current user;
the structured-light camera is configured to:
capture a structured-light image modulated by the current user;
demodulate the phase information corresponding to each pixel of the structured-light image to obtain a face depth image of the current user; and
generate the face 3D model of the current user according to the face depth image and the structured-light image.
Further, the structured-light camera is also configured to:
demodulate the phase information corresponding to each pixel of the structured-light image;
convert the phase information into depth information; and
generate the face depth image according to the depth information.
Further, the processor is also configured to:
analyze the face 3D model and extract feature point information of the face 3D model; and
compare the feature point information of the face 3D model with pre-stored eye feature point information to determine the position information of the eyes in the face 3D model.
Further, the display parameters include brightness and contrast;
the processor is also configured to:
judge whether the viewing angle is greater than a preset viewing-angle threshold; and
when the viewing angle is greater than the preset viewing-angle threshold, increase the brightness and contrast of the terminal device.
Further, the processor is also configured to:
obtain the number of current users using the terminal device; and
when the number of current users is greater than a preset number threshold, increase the brightness and contrast of the terminal device.
With the display parameter adjustment device of embodiments of the present invention, light is projected onto the current user of a terminal device to obtain a face 3D model of the current user; the face 3D model is analyzed to obtain the position information of the eyes in the face 3D model; the viewing angle at which the current user watches the terminal device is calculated according to the position information of the eyes and the position information of the terminal device; and the display parameters of the terminal device are adjusted according to the viewing angle. Thus, when the user is at different positions relative to the terminal device, the display parameters are adjusted according to the user's position, so that the user can view a display screen with appropriate brightness and contrast at his or her current position, which improves the display effect of the terminal device's display screen and the user's experience with the terminal device.
The electronic device of embodiments of the present invention includes one or more processors, a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the programs include instructions for performing the display parameter adjustment method described above.
The computer-readable storage medium of embodiments of the present invention includes a computer program used in combination with an electronic device capable of imaging; the computer program can be executed by a processor to perform the display parameter adjustment method described above.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and in part will become apparent from the following description or be learned through practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of the display parameter adjustment method of some embodiments of the present invention.
Fig. 2 is a schematic block diagram of the display parameter adjustment device of some embodiments of the present invention.
Fig. 3 is a schematic structural diagram of the electronic device of some embodiments of the present invention.
Fig. 4 is a schematic flowchart of the display parameter adjustment method of some embodiments of the present invention.
Fig. 5 is a schematic flowchart of the display parameter adjustment method of some embodiments of the present invention.
Fig. 6(a) to Fig. 6(e) are schematic views of a structured-light measurement scenario according to one embodiment of the present invention.
Fig. 7(a) and Fig. 7(b) are schematic views of a structured-light measurement scenario according to one embodiment of the present invention.
Fig. 8 is a schematic flowchart of the display parameter adjustment method of some embodiments of the present invention.
Fig. 9 is a schematic block diagram of the electronic device of some embodiments of the present invention.
Fig. 10 is a schematic block diagram of the electronic device of some embodiments of the present invention.
Embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout represent the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the present invention and should not be construed as limiting the present invention.
Referring to Figs. 1 and 2 together, the display parameter adjustment method of embodiments of the present invention is used for an electronic device 1000. The display parameter adjustment method includes:
S101: projecting light onto the current user of a terminal device, and obtaining a face 3D model of the current user.
S102: analyzing the face 3D model, and obtaining the position information of the eyes in the face 3D model.
S103: calculating the viewing angle at which the current user watches the terminal device according to the position information of the eyes and the position information of the terminal device.
S104: adjusting the display parameters of the terminal device according to the viewing angle.
Referring to Fig. 3, the display parameter adjustment method of embodiments of the present invention can be implemented by the display parameter adjustment device 100 of embodiments of the present invention. The display parameter adjustment device 100 is used for the electronic device 1000 and includes a depth image acquisition component 12 and a processor 20. In this embodiment, steps S102, S103 and S104 can be implemented by the processor 20, and step S101 can be implemented by the depth image acquisition component 12. In addition, the display parameter adjustment device 100 may also include a visible-light camera 11 for capturing a picture of the scene where the current user is located, so that the specific orientation of the current user can be identified from the picture.
That is, the depth image acquisition component 12 is configured to project light onto the current user of the terminal device and obtain the face 3D model of the current user; the processor 20 is configured to analyze the face 3D model to obtain the position information of the eyes in the face 3D model, calculate the viewing angle at which the current user watches the terminal device according to the position information of the eyes and the position information of the terminal device, and adjust the display parameters of the terminal device according to the viewing angle.
In this embodiment, the display parameters include brightness and contrast. Correspondingly, the processor 20 is also configured to judge whether the viewing angle is greater than a preset viewing-angle threshold, and to increase the brightness and contrast of the terminal device when the viewing angle is greater than the preset viewing-angle threshold.
In this embodiment, for the multiple positions at equal distance from the display screen, the display screen provides different brightness for the positions corresponding to different viewing angles. Here, the preset viewing-angle thresholds are explained using a first angle threshold and a second angle threshold as an example. For instance, a first brightness value is provided for the positions whose viewing angle is less than the first angle threshold; half of the first brightness value is provided for the positions whose viewing angle is greater than the first angle threshold and less than the second angle threshold; and a quarter of the first brightness value is provided for the positions whose viewing angle is greater than the second angle threshold.
Correspondingly, when the user's viewing angle is less than the first angle threshold, the processor 20 does not need to adjust the brightness of the display screen; when the user's viewing angle is greater than the first angle threshold and less than the second angle threshold, the processor 20 can double the brightness value; and when the user's viewing angle is greater than the second angle threshold, the processor 20 can multiply the brightness value by 4. In this way, wherever the user is relative to the terminal device, the brightness of the display screen at the corresponding position is suitable. When the user moves from a position whose viewing angle is greater than the second angle threshold to a position whose viewing angle is greater than the first angle threshold and less than the second angle threshold, the processor 20 can halve the brightness value; and when the user moves from a position whose viewing angle is greater than the second angle threshold to a position whose viewing angle is less than the first angle threshold, the processor 20 can divide the brightness value by 4.
In this embodiment, for the multiple positions at equal distance from the display screen, the display screen likewise provides different contrast for the positions corresponding to different viewing angles. For example, a first contrast value is provided for the positions whose viewing angle is less than the first angle threshold; half of the first contrast value is provided for the positions whose viewing angle is greater than the first angle threshold and less than the second angle threshold; and a quarter of the first contrast value is provided for the positions whose viewing angle is greater than the second angle threshold.
Correspondingly, when the user's viewing angle is less than the first angle threshold, the processor 20 does not need to adjust the contrast of the display screen; when the user's viewing angle is greater than the first angle threshold and less than the second angle threshold, the processor 20 can double the contrast value; and when the user's viewing angle is greater than the second angle threshold, the processor 20 can multiply the contrast value by 4. In this way, wherever the user is relative to the terminal device, the contrast of the display screen at the corresponding position is suitable. When the user moves from a position whose viewing angle is greater than the second angle threshold to a position whose viewing angle is greater than the first angle threshold and less than the second angle threshold, the processor 20 can halve the contrast value; and when the user moves from a position whose viewing angle is greater than the second angle threshold to a position whose viewing angle is less than the first angle threshold, the processor 20 can divide the contrast value by 4.
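The tiered compensation described above can be sketched as a single scale factor applied to both brightness and contrast. The threshold values of 30° and 60° and the function name are illustrative assumptions, since the patent leaves the concrete thresholds unspecified.

```python
def display_scale(viewing_angle_deg, first_threshold=30.0, second_threshold=60.0):
    """Compensation factor for brightness and contrast: x1 at small viewing
    angles, x2 between the two thresholds, x4 beyond the second threshold
    (threshold values are illustrative assumptions)."""
    if viewing_angle_deg < first_threshold:
        return 1
    if viewing_angle_deg < second_threshold:
        return 2
    return 4

base_brightness, base_contrast = 100, 50
for angle in (10, 45, 75):
    s = display_scale(angle)
    print(angle, base_brightness * s, base_contrast * s)
# 10 → unchanged, 45 → doubled, 75 → quadrupled
```

Because the factor is recomputed from the current viewing angle, moving back toward the screen normal automatically halves or quarters the values again, matching the move-back behaviour described in the text.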
Further, in this embodiment, the method may also include: obtaining the number of current users using the terminal device; and when the number of current users is greater than a preset number threshold, increasing the brightness and contrast of the terminal device.
That is, the processor 20 is also configured to obtain the number of current users using the terminal device, and to increase the brightness and contrast of the terminal device when the number of current users is greater than the preset number threshold.
In this embodiment, when the number of current users using the terminal device is greater than the preset number threshold, the processor 20 can increase the brightness and contrast of the terminal device in order to ensure that every current user, each at a different position, can view a display screen with suitable brightness and contrast. The preset number threshold can be, for example, 1 or 2.
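A minimal sketch of this multi-viewer rule follows. The patent only states that brightness and contrast are increased when the user count exceeds the preset threshold, so the boost factor of 1.5 and the function name here are hypothetical.

```python
def adjust_for_user_count(user_count, brightness, contrast,
                          count_threshold=1, boost=1.5):
    """Raise both brightness and contrast when more current users than the
    preset number threshold are watching, so viewers at off-axis positions
    still see a usable picture (the boost factor is an assumption)."""
    if user_count > count_threshold:
        return brightness * boost, contrast * boost
    return brightness, contrast

print(adjust_for_user_count(1, 100, 50))  # → (100, 50): at the threshold, unchanged
print(adjust_for_user_count(3, 100, 50))  # → (150.0, 75.0): boosted for a group
```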
The display parameter adjustment device 100 of embodiments of the present invention can be applied to the electronic device 1000 of embodiments of the present invention. In other words, the electronic device 1000 of embodiments of the present invention includes the display parameter adjustment device 100 of embodiments of the present invention.
In some embodiments, the electronic device 1000 includes a mobile phone, a tablet computer, a notebook computer, a smart bracelet, a smart watch, a smart helmet, smart glasses, and the like.
With the display parameter adjustment method of embodiments of the present invention, light is projected onto the current user of a terminal device to obtain a face 3D model of the current user; the face 3D model is analyzed to obtain the position information of the eyes in the face 3D model; the viewing angle at which the current user watches the terminal device is calculated according to the position information of the eyes and the position information of the terminal device; and the display parameters of the terminal device are adjusted according to the viewing angle. Thus, when the user is at different positions relative to the terminal device, the display parameters are adjusted according to the user's position, so that the user can view a display screen with appropriate brightness and contrast at his or her current position, which improves the display effect of the terminal device's display screen and the user's experience with the terminal device.
Referring to Fig. 4, in some embodiments, step S101 may specifically include the following steps:
S1011: projecting structured light onto the current user.
S1012: capturing a structured-light image modulated by the current user.
S1013: demodulating the phase information corresponding to each pixel of the structured-light image to obtain a face depth image of the current user.
S1014: generating a face 3D model of the current user according to the face depth image and the structured-light image.
Referring again to Fig. 3, in some embodiments, the depth image acquisition component 12 includes a structured-light projector 121 and a structured-light camera 122. Step S1011 can be implemented by the structured-light projector 121, and steps S1012, S1013 and S1014 can be implemented by the structured-light camera 122.
In other words, the structured-light projector 121 can be used to project structured light onto the current user; the structured-light camera 122 can be used to capture the structured-light image modulated by the current user, demodulate the phase information corresponding to each pixel of the structured-light image to obtain the face depth image of the current user, and generate the face 3D model of the current user according to the face depth image and the structured-light image.
Specifically, after the structured-light projector 121 projects structured light of a certain pattern onto the face or body of the current user, a structured-light image modulated by the current user is formed on the surface of the current user's face or body. The structured-light camera 122 captures the modulated structured-light image and demodulates it to obtain a depth image, and thereby the face depth image; the face 3D model of the current user is then generated by combining the face depth image with the structured-light image of the face region. The pattern of the structured light may be laser stripes, Gray codes, sinusoidal fringes, non-uniform speckle, and so on.
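The patent does not spell out how the depth image becomes a 3D model; one common approach, shown here as a hedged sketch, back-projects each depth pixel into a 3D point through a pinhole camera model. The camera intrinsics (fx, fy, cx, cy) and the toy depth values below are assumptions for illustration only.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image into a per-pixel 3D point set using the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

# Toy 2x2 "face depth image" (depth in millimetres) and made-up intrinsics.
depth = np.array([[100.0, 100.0],
                  [120.0, 120.0]])
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
print(cloud.shape)  # → (2, 2, 3): one (X, Y, Z) point per pixel
```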
Referring to Fig. 5, in some embodiments, the process in step S1013 of demodulating the phase information corresponding to each pixel of the structured-light image to obtain the face depth image of the current user may specifically include the following steps:
S10131: demodulating the phase information corresponding to each pixel of the structured-light image.
S10132: converting the phase information into depth information.
S10133: generating the face depth image according to the depth information.
Referring again to Fig. 2, in some embodiments, steps S10131, S10132 and S10133 can be implemented by the structured-light camera 122.
In other words, the structured-light camera 122 can be further used to demodulate the phase information corresponding to each pixel of the structured-light image, convert the phase information into depth information, and generate the face depth image according to the depth information.
Specifically, compared with unmodulated structured light, the phase information of the modulated structured light has changed: the structured light shown in the structured-light image is distorted, and the changed phase information can characterize the depth information of the object. Therefore, the structured-light camera 122 first demodulates the phase information corresponding to each pixel in the structured-light image, and then calculates the depth information from the phase information, thereby obtaining the final face depth image.
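A minimal sketch of the phase-to-depth conversion under a simplifying assumption: for small surface deformations, the depth change is roughly proportional to the phase difference between the object fringes and the reference-plane fringes. The constant k and the sample values are hypothetical; a real system would derive the mapping from the calibrated projector-camera geometry rather than a single scalar.

```python
import numpy as np

def phase_to_depth(phase_object, phase_reference, k=1.0):
    """Simplified linear mapping: depth is taken as proportional to the
    phase difference between the object fringes and the reference-plane
    fringes; k stands in for the calibrated geometric constants."""
    return k * (phase_object - phase_reference)

phase_reference = np.zeros((2, 2))          # flat reference plane
phase_object = np.array([[0.0, 0.2],
                         [0.4, 0.6]])       # demodulated phase (radians)
depth = phase_to_depth(phase_object, phase_reference, k=10.0)
print(depth)  # each 0.1 rad of phase shift maps to 1 depth unit here
```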
To make the process of collecting the depth image of the face or body of the current user with structured light clearer to those skilled in the art, its principle is illustrated below with a widely used grating projection (fringe projection) technique as an example. Grating projection belongs to area structured light in the broad sense.
As shown in Fig. 6(a), when area structured light is used for projection, sinusoidal fringes are first generated by computer programming, and the fringes are projected onto the measured object by structured light projector 121. Structure light camera 122 then captures the degree to which the fringes are bent after modulation by the object, and the curved fringes are demodulated to obtain the phase, which is converted into depth information to obtain the depth image. To avoid errors or error coupling, the depth image acquisition assembly 12 must be calibrated before depth information is collected with structured light. Calibration includes calibration of geometric parameters (for example, the relative position of structure light camera 122 and structured light projector 121), of the internal parameters of structure light camera 122, and of the internal parameters of structured light projector 121.
Specifically, in the first step, sinusoidal fringes are generated by computer programming. Since the distorted fringes will later be used to obtain the phase, for example with the four-step phase-shifting method, four fringe patterns with phase differences of π/2 are generated here. Structured light projector 121 projects the four patterns onto the measured object in time sequence (the mask shown in Fig. 6(a)), and structure light camera 122 collects images such as the one on the left of Fig. 6(b), while the fringes on the reference plane, shown on the right of Fig. 6(b), are read.
In the second step, phase recovery is carried out. Structure light camera 122 calculates the phase map modulated by the object from the four collected modulated fringe patterns (i.e. structure light images); what is obtained at this point is a wrapped phase map. Because the result of the four-step phase-shifting algorithm is computed with an arctangent function, the phase of the modulated structured light is limited to [-π, π]: whenever the modulated phase exceeds [-π, π], it wraps around again. The resulting principal phase values are shown in Fig. 6(c).
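The wrapped-phase computation of this second step can be sketched for a single pixel as follows. The arctangent relation is the standard four-step phase-shifting formula for samples I_k = A + B·cos(φ + k·π/2); the sample values below are synthetic.

```python
import math

def wrapped_phase(i0, i1, i2, i3):
    """Four-step phase shifting: recover the wrapped phase (in [-pi, pi])
    from four intensity samples taken with pi/2 phase offsets.
    For I_k = A + B*cos(phi + k*pi/2): i3 - i1 = 2B*sin(phi) and
    i0 - i2 = 2B*cos(phi), so atan2 yields phi."""
    return math.atan2(i3 - i1, i0 - i2)

# Synthetic check: generate the four samples for a known phase phi.
A, B, phi = 0.5, 0.4, 1.2
samples = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
recovered = wrapped_phase(*samples)
```

Note that for a true phase outside [-π, π] the same formula returns the wrapped principal value, which is exactly the limitation described above.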
During phase recovery, de-jump processing is required to restore the wrapped phase to a continuous phase. As shown in Fig. 6(d), the left side is the modulated continuous phase map and the right side is the reference continuous phase map.
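The de-jump (phase unwrapping) step can be illustrated with a minimal 1-D sketch; a real implementation unwraps a 2-D phase map and typically uses more robust spatial unwrapping algorithms.

```python
import math

def unwrap(phases):
    """Remove 2*pi jumps from a 1-D sequence of wrapped phases so that
    the result is continuous (the de-jump processing described above)."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        # Shift the step by the multiple of 2*pi that makes it smallest.
        d -= 2 * math.pi * round(d / (2 * math.pi))
        out.append(out[-1] + d)
    return out

# A steadily increasing phase ramp wraps at +pi; unwrapping restores it.
true_phase = [0.5 * k for k in range(10)]            # 0.0, 0.5, ..., 4.5
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true_phase]
recovered = unwrap(wrapped)
```

The sketch assumes the phase changes by less than π between neighboring samples, which is the usual validity condition for this simple unwrapping rule.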
In the third step, the phase difference (i.e. the phase information) is obtained by subtracting the reference continuous phase from the modulated continuous phase. This phase difference characterizes the depth of the measured object relative to the reference plane. Substituting the phase difference into the phase-to-depth conversion formula (whose parameters are obtained by calibration) yields the three-dimensional model of the measured object shown in Fig. 6(e).
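A sketch of this third step, assuming the simplest linear form of the phase-to-depth conversion formula; the single constant `k` stands in for the calibrated parameters, which in practice are more numerous and the relation may be nonlinear.

```python
def phase_to_depth(phi_obj, phi_ref, k):
    """Convert the phase difference between the modulated and reference
    continuous phases into depth relative to the reference plane.
    `k` stands in for the calibration parameters; a linear model
    depth = k * (phi_obj - phi_ref) is assumed for illustration."""
    return k * (phi_obj - phi_ref)

# Reference phase 1.0 rad, assumed calibration constant k = 2.5.
depths = [phase_to_depth(p, 1.0, 2.5) for p in (1.0, 1.4, 2.0)]
```

A point lying on the reference plane (equal phases) maps to depth 0, as expected.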
It should be appreciated that in practice, depending on the specific application scenario, the structured light employed in embodiments of the present invention may be any other pattern besides the above grating.
As a possible implementation, the present invention may also use speckle structured light to collect the depth information of the current user.
Specifically, the method of obtaining depth information with speckle structured light uses a diffraction element that is essentially a flat plate. The element has a relief diffraction structure with a particular phase distribution, and its cross section has a stepped relief structure of two or more levels. The substrate of the diffraction element is roughly 1 micron thick, and the heights of the steps are non-uniform, ranging from 0.7 micron to 0.9 micron. The structure shown in Fig. 7(a) is a partial diffraction structure of the collimating beam-splitting element of this embodiment; Fig. 7(b) is a cross-sectional side view along section A-A, with both axes in microns. The speckle pattern generated by the speckle structured light is highly random and changes with distance. Therefore, before depth information can be obtained with speckle structured light, the speckle patterns in space must first be calibrated: for example, within a range of 0 to 4 meters from structure light camera 122, a reference plane is taken every 1 centimeter, so 400 speckle images are saved after calibration; the smaller the calibration spacing, the higher the precision of the obtained depth information. Structured light projector 121 then projects the speckle structured light onto the measured object (i.e. the current user), and the height differences of the object's surface alter the projected speckle pattern. After structure light camera 122 captures the speckle pattern projected on the measured object (i.e. the structure light image), the pattern is cross-correlated one by one with the 400 speckle images saved during calibration, yielding 400 correlation images. The position of the measured object in space shows a peak in the correlation images; superimposing these peaks and performing interpolation yields the depth information of the measured object.
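The per-plane cross-correlation lookup can be sketched as follows. The tiny four-sample "patterns" and the two-entry calibration table are illustrative stand-ins for the 400 calibrated speckle images, and plain normalized cross-correlation replaces the peak superposition and interpolation described above.

```python
def best_match_depth(captured, references):
    """Pick the calibrated depth whose reference speckle pattern
    correlates best with the captured pattern. Patterns are flat lists
    of floats; `references` maps depth (cm) -> reference pattern."""
    def ncc(a, b):
        # Normalized cross-correlation of two equal-length sequences.
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = sum((x - ma) ** 2 for x in a) ** 0.5
        db = sum((y - mb) ** 2 for y in b) ** 0.5
        return num / (da * db) if da and db else 0.0
    return max(references, key=lambda d: ncc(captured, references[d]))

# Hypothetical calibration table: two reference planes at 100 and 101 cm.
refs = {
    100: [0.1, 0.9, 0.2, 0.8],
    101: [0.9, 0.1, 0.8, 0.2],
}
captured = [0.12, 0.88, 0.22, 0.79]   # noisy copy of the 100 cm pattern
depth = best_match_depth(captured, refs)
```

In the real system this matching runs per image window, and sub-centimeter depth is recovered by interpolating between neighboring correlation peaks.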
A common diffraction element produces many diffracted beams from one incident beam, but the intensities of those beams differ greatly, so the risk of injury to the human eye is also large; even if the diffracted light is diffracted again, the uniformity of the resulting beams is low. Therefore, projecting onto the measured object with beams diffracted by a common diffraction element gives poor results. In this embodiment a collimating beam-splitting element is used: it not only collimates uncollimated light but also splits it, so that the uncollimated light reflected by the mirror exits the collimating beam-splitting element as multiple collimated beams at different angles. The cross-sectional areas of these collimated beams are approximately equal and their energy fluxes are approximately equal, so projection with the speckle light obtained after diffracting these beams is better. At the same time, the emitted laser light is dispersed over the individual beams, further reducing the risk of injuring the human eye; and compared with other uniformly arranged structured light, speckle structured light consumes less power for the same collection effect.
Referring to Fig. 8, in some embodiments, step 102 may specifically include the following steps:
S1021: analyze the face 3D model and extract the feature point information of the face 3D model.
S1022: compare the feature point information of the face 3D model with prestored eye feature point information to determine the positional information of the eyes in the face 3D model.
Referring again to Fig. 2, in some embodiments, steps 1021 and 1022 may be implemented by processor 20.
In other words, processor 20 may further be used to analyze the face 3D model and extract its feature point information, and to compare the feature point information of the face 3D model with the prestored eye feature point information to determine the positional information of the eyes in the face 3D model.
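One purely illustrative way to realize the comparison of steps S1021 and S1022: treat the prestored eye feature point information as 3-D template points and match model feature points to them by Euclidean distance. The descriptor, the tolerance value, and the matching rule here are assumptions, not the patent's specified method.

```python
def locate_eyes(model_points, eye_templates, tol=0.2):
    """Return the model feature points that lie within `tol` of a
    prestored eye-feature template point (hypothetical matching rule:
    plain Euclidean distance in model coordinates)."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return [p for p in model_points
            if any(dist(p, t) < tol for t in eye_templates)]

# Hypothetical extracted feature points and stored left/right eye templates.
model = [(0.0, 0.0, 0.0), (3.1, 5.0, 2.0), (6.9, 5.1, 2.0), (5.0, 1.0, 1.0)]
templates = [(3.0, 5.0, 2.0), (7.0, 5.0, 2.0)]
eyes = locate_eyes(model, templates)
```

The matched points then serve as the positional information of the eyes used in the viewing angle calculation of step 103.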
Before step 102, processor 20 may project structured light toward the terminal device user; capture the structure light image modulated by the user; demodulate the phase information corresponding to each pixel of the structure light image to obtain the user's face depth image; and generate the user's face 3D model from the face depth image and the structure light image. Processor 20 may then analyze the user's face 3D model and store the extracted eye feature point information for later comparison. Alternatively, processor 20 may prestore the eye feature point information of other users for comparison.
Referring to Fig. 3 and Fig. 9 together, an embodiment of the present invention also proposes an electronic device 1000. Electronic device 1000 includes a display parameter adjustment apparatus 100, which may be implemented in hardware and/or software. Display parameter adjustment apparatus 100 includes an imaging device 10 and a processor 20.
Imaging device 10 includes a visible light camera 11 and a depth image acquisition assembly 12.
Specifically, visible light camera 11 includes an image sensor 111 and one or more lenses 112. Image sensor 111 includes a color filter array (such as a Bayer filter array). Each imaging pixel in image sensor 111 senses the light intensity and wavelength information in the photographed scene and generates a set of raw image data. Image sensor 111 sends the raw image data to processor 20, which performs operations such as denoising and interpolation on it to obtain a full-color image. Processor 20 may process each image pixel of the raw image data one by one in various formats; for example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits, and processor 20 may process the pixels at the same or different bit depths.
Depth image acquisition assembly 12 includes a structured light projector 121 and a structure light camera 122, and may be used to capture the depth information of the current user to obtain a face depth image. Structured light projector 121 projects structured light onto the current user, where the structured light pattern may be laser stripes, a Gray code, sinusoidal fringes, a randomly arranged speckle pattern, or the like. Structure light camera 122 includes an image sensor 1221 and one or more lenses 1222. Image sensor 1221 captures the structure light image projected by structured light projector 121 onto the current user. The structure light image may be sent by depth acquisition assembly 12 to processor 20 for demodulation, phase recovery, phase information calculation and other processing to obtain the depth information of the current user, and thereby the current user's face depth image and face 3D model.
In some embodiments, the functions of visible light camera 11 and structure light camera 122 may be realized by a single camera; in other words, imaging device 10 includes only one camera and one structured light projector 121, and this camera can capture both ordinary images and structure light images.
Besides structured light, the face depth image of the current user may also be obtained by other depth acquisition methods such as binocular vision or time-of-flight (TOF) ranging.
Processor 20 is further used to analyze the face 3D model to obtain the positional information of the eyes in the face 3D model; to calculate, from the positional information of the eyes and the positional information of the terminal device, the viewing angle at which the current user views the terminal device; and to adjust the display parameters of the terminal device according to the viewing angle.
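The viewing angle calculation and the threshold-based adjustment can be sketched as follows. The screen-normal geometry, the 30-degree threshold, and the adjustment step size are all illustrative assumptions; the patent only specifies that brightness and contrast increase when the viewing angle exceeds a preset threshold.

```python
import math

def viewing_angle_deg(eye_pos, screen_center, screen_normal):
    """Angle between the screen normal and the eye-to-screen direction;
    0 degrees means the user looks at the display head-on. All inputs
    are 3-D vectors in an assumed common coordinate frame."""
    gaze = [e - s for e, s in zip(eye_pos, screen_center)]
    dot = sum(g * n for g, n in zip(gaze, screen_normal))
    ng = math.sqrt(sum(g * g for g in gaze))
    nn = math.sqrt(sum(n * n for n in screen_normal))
    return math.degrees(math.acos(dot / (ng * nn)))

def adjust_display(angle_deg, brightness, contrast, threshold=30.0, step=0.1):
    """Increase brightness and contrast (clamped to [0, 1]) when the
    viewing angle exceeds a preset threshold; values are illustrative."""
    if angle_deg > threshold:
        brightness = min(1.0, brightness + step)
        contrast = min(1.0, contrast + step)
    return brightness, contrast

# Eye half a unit off-axis, one unit in front of the screen center.
angle = viewing_angle_deg((0.5, 0.0, 1.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
b, c = adjust_display(angle, 0.5, 0.5)   # about 26.6 deg: below threshold
```

An oblique viewer (for example 45 degrees) would instead trigger the brightness and contrast increase.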
In addition, display parameter adjustment apparatus 100 includes an image memory 30. Image memory 30 may be embedded in electronic device 1000 or be a memory independent of electronic device 1000, and may include a direct memory access (DMA) feature. The image data collected by visible light camera 11 or the structure light image data collected by depth image acquisition assembly 12 may be transferred to image memory 30 for storage or caching. Processor 20 may read raw image data from image memory 30, and may also read structure light image data from image memory 30 and process it to obtain a depth image. The image data and depth image may also be stored in image memory 30 for processor 20 to call for processing at any time.
Display parameter adjustment apparatus 100 may also include a display 50. Display 50 may directly display the current user's face 3D model for the user to view, or the model may be further processed by a graphics engine or a graphics processing unit (GPU). Display parameter adjustment apparatus 100 also includes an encoder/decoder 60, which may encode and decode image data such as depth images; the encoded image data may be stored in image memory 30 and decompressed by the decoder for display before the image is shown on display 50. Encoder/decoder 60 may be implemented by a central processing unit (CPU), a GPU or a coprocessor; in other words, encoder/decoder 60 may be any one or more of a CPU, a GPU and a coprocessor.
Display parameter adjustment apparatus 100 also includes a control logic 40. While imaging device 10 is imaging, processor 20 may analyze the data obtained by the imaging device to determine image statistics for one or more control parameters of imaging device 10 (for example, exposure time). Processor 20 sends the image statistics to control logic 40, and control logic 40 controls imaging device 10 to image with the determined control parameters. Control logic 40 may include a processor and/or a microcontroller that executes one or more routines (such as firmware), and the routines may determine the control parameters of imaging device 10 from the received image statistics.
Referring to Fig. 10, the electronic device 1000 of an embodiment of the present invention includes one or more processors 200, a memory 300 and one or more programs 310. The one or more programs 310 are stored in memory 300 and configured to be executed by the one or more processors 200. Program 310 includes instructions for performing the display parameter adjustment method of any of the above embodiments.
For example, program 310 includes instructions for performing the display parameter adjustment method of the following steps:
projecting toward the current user of the terminal device and obtaining a face 3D model of the current user;
analyzing the face 3D model to obtain the positional information of the eyes in the face 3D model;
calculating, according to the positional information of the eyes and the positional information of the terminal device, the viewing angle at which the current user uses the terminal device;
adjusting the display parameters of the terminal device according to the viewing angle.
As another example, program 310 also includes instructions for performing the display parameter adjustment method of the following steps:
projecting structured light toward the current user;
capturing the structure light image modulated by the current user; and
demodulating the phase information corresponding to each pixel of the structure light image to obtain the face depth image of the current user;
generating the face 3D model of the current user according to the face depth image of the current user and the structure light image.
The computer-readable storage medium of an embodiment of the present invention includes a computer program used in combination with the imaging-capable electronic device 1000. The computer program can be executed by processor 200 to complete the display parameter adjustment method of any of the above embodiments.
For example, the computer program can be executed by processor 200 to complete the display parameter adjustment method of the following steps:
demodulating the phase information corresponding to each pixel of the structure light image;
converting the phase information into depth information; and
generating the face depth image according to the depth information.
As another example, the computer program can also be executed by processor 200 to complete the display parameter adjustment method of the following steps:
analyzing the face 3D model and extracting the feature point information of the face 3D model;
comparing the feature point information of the face 3D model with prestored eye feature point information to determine the positional information of the eyes in the face 3D model.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" means that a particular feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic references to these terms do not necessarily refer to the same embodiment or example. Moreover, the described particular features, structures, materials or characteristics may be combined in a suitable manner in any one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine different embodiments or examples, and the features of different embodiments or examples, described in this specification.
In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "multiple" means at least two, for example two or three, unless otherwise specifically defined.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment or portion of code that includes one or more executable instructions for implementing the steps of a specific logical function or process, and the scope of the preferred embodiments of the present invention includes other implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which embodiments of the present invention belong.
Logic and/or steps represented in a flowchart or otherwise described herein, for example an ordered list of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate or transmit a program for use by, or in combination with, an instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic apparatus) with one or more wirings, a portable computer diskette (magnetic apparatus), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber apparatus, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it in a suitable way if necessary, and then stored in a computer memory.
It should be appreciated that each part of the present invention may be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any of the following technologies known in the art, or a combination thereof, may be used: a discrete logic circuit with logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and so on.
Those of ordinary skill in the art will appreciate that all or part of the steps carried by the above embodiment methods can be completed by a program instructing relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software function module. If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc or the like. Although embodiments of the present invention have been shown and described above, it is to be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, replacements and variations to the above embodiments within the scope of the present invention.

Claims (14)

  1. A display parameter adjustment method, characterized by comprising:
    projecting toward a current user of a terminal device, and obtaining a face 3D model of the current user;
    analyzing the face 3D model to obtain positional information of eyes in the face 3D model;
    calculating, according to the positional information of the eyes and positional information of the terminal device, a viewing angle at which the current user uses the terminal device;
    adjusting display parameters of the terminal device according to the viewing angle.
  2. The method according to claim 1, characterized in that the projecting toward the current user of the terminal device and obtaining the face 3D model of the current user comprises:
    projecting structured light toward the current user;
    capturing a structure light image modulated by the current user; and
    demodulating phase information corresponding to each pixel of the structure light image to obtain a face depth image of the current user;
    generating the face 3D model of the current user according to the face depth image of the current user and the structure light image.
  3. The method according to claim 2, characterized in that the demodulating phase information corresponding to each pixel of the structure light image to obtain the face depth image of the current user comprises:
    demodulating the phase information corresponding to each pixel of the structure light image;
    converting the phase information into depth information; and
    generating the face depth image according to the depth information.
  4. The method according to claim 1, characterized in that the analyzing the face 3D model to obtain the positional information of the eyes in the face 3D model comprises:
    analyzing the face 3D model and extracting feature point information of the face 3D model;
    comparing the feature point information of the face 3D model with prestored eye feature point information to determine the positional information of the eyes in the face 3D model.
  5. The method according to claim 1, characterized in that the display parameters comprise brightness and contrast;
    the adjusting the display parameters of the terminal device according to the viewing angle comprises:
    judging whether the viewing angle is greater than a preset viewing angle threshold;
    when the viewing angle is greater than the preset viewing angle threshold, increasing the brightness and contrast of the terminal device.
  6. The method according to claim 1, characterized by further comprising:
    obtaining a quantity of current users using the terminal device;
    when the quantity of current users is greater than a preset quantity threshold, increasing the brightness and contrast of the terminal device.
  7. A display parameter adjustment apparatus, characterized by comprising:
    a depth image acquisition assembly, the depth image acquisition assembly being used to project toward a current user of a terminal device and obtain a face 3D model of the current user;
    a processor, the processor being used to analyze the face 3D model to obtain positional information of eyes in the face 3D model;
    to calculate, according to the positional information of the eyes and positional information of the terminal device, a viewing angle at which the current user uses the terminal device;
    and to adjust display parameters of the terminal device according to the viewing angle.
  8. The apparatus according to claim 7, characterized in that the depth image acquisition assembly includes a structured light projector and a structure light camera, the structured light projector being used to project structured light toward the current user;
    the structure light camera being used to:
    capture a structure light image modulated by the current user; and
    demodulate phase information corresponding to each pixel of the structure light image to obtain a face depth image of the current user;
    generate the face 3D model of the current user according to the face depth image of the current user and the structure light image.
  9. The apparatus according to claim 8, characterized in that the structure light camera is further used to:
    demodulate the phase information corresponding to each pixel of the structure light image;
    convert the phase information into depth information; and
    generate the face depth image according to the depth information.
  10. The apparatus according to claim 7, characterized in that the processor is further used to:
    analyze the face 3D model and extract feature point information of the face 3D model;
    compare the feature point information of the face 3D model with prestored eye feature point information to determine the positional information of the eyes in the face 3D model.
  11. The apparatus according to claim 7, characterized in that the display parameters comprise brightness and contrast;
    the processor being further used to:
    judge whether the viewing angle is greater than a preset viewing angle threshold;
    when the viewing angle is greater than the preset viewing angle threshold, increase the brightness and contrast of the terminal device.
  12. The apparatus according to claim 7, characterized in that the processor is further used to:
    obtain a quantity of current users using the terminal device;
    when the quantity of current users is greater than a preset quantity threshold, increase the brightness and contrast of the terminal device.
  13. An electronic device, characterized in that the electronic device comprises:
    one or more processors;
    a memory; and
    one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including instructions for performing the display parameter adjustment method of any one of claims 1 to 6.
  14. A computer-readable storage medium, characterized by comprising a computer program used in combination with an imaging-capable electronic device, the computer program being executable by a processor to complete the display parameter adjustment method of any one of claims 1 to 6.
CN201711038000.0A 2017-10-27 2017-10-27 Display parameter adjustment method, apparatus and electronic device Pending CN107807806A (en)


Publication Number: CN107807806A; Publication Date: 2018-03-16



Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1510973A2 (en) * 2003-08-29 2005-03-02 Samsung Electronics Co., Ltd. Method and apparatus for image-based photorealistic 3D face modeling
CN101339607A (en) * 2008-08-15 2009-01-07 Beijing Vimicro Corp Face recognition method and system, and face recognition model training method and system
CN103971408A (en) * 2014-05-21 2014-08-06 Suzhou Institute of Nano-Tech and Nano-Bionics, Chinese Academy of Sciences Three-dimensional facial model generating system and method
CN106157931A (en) * 2016-06-28 2016-11-23 Guangdong Oppo Mobile Telecommunications Corp Ltd Control method, control device and electronic device
CN106200926A (en) * 2016-06-28 2016-12-07 Guangdong Oppo Mobile Telecommunications Corp Ltd Control method, control device and electronic device
CN106507005A (en) * 2016-12-05 2017-03-15 Le Holdings (Beijing) Co Ltd Backlight brightness control method and device
CN106991377A (en) * 2017-03-09 2017-07-28 Guangdong Oppo Mobile Telecommunications Corp Ltd Face recognition method, face recognition device and electronic device combining depth information
CN107169483A (en) * 2017-07-12 2017-09-15 Shenzhen Orbbec Co Ltd Task execution based on face recognition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xiao Song et al.: "Principles and Applications of Computer Graphics" (计算机图形学原理及应用), 30 June 2014 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108668008A (en) * 2018-03-30 2018-10-16 Guangdong Oppo Mobile Telecommunications Corp Ltd Electronic device, display parameter adjustment method and related products
CN110316063A (en) * 2018-03-30 2019-10-11 BYD Co Ltd Vehicle-mounted display terminal system, display control method for a vehicle-mounted display terminal, and vehicle
CN108668008B (en) * 2018-03-30 2021-04-16 Guangdong Oppo Mobile Telecommunications Corp Ltd Electronic device, display parameter adjustment method and device, and computer-readable storage medium
CN110236551A (en) * 2019-05-21 2019-09-17 Tibet Nawang Network Technology Co Ltd Method, device, electronic device and medium for acquiring a user's cervical spine tilt angle
CN112445554A (en) * 2019-08-28 2021-03-05 ZTE Corp Display effect adjustment method and device, terminal device and storage medium
CN110992915A (en) * 2019-11-22 2020-04-10 BOE Technology Group Co Ltd Display device and display method thereof
US11393381B2 (en) * 2020-06-16 2022-07-19 Hyundai Motor Company Vehicle and image display method for preventing display color distortion according to a view angle of a driver
CN111914693A (en) * 2020-07-16 2020-11-10 Shanghai Yuncong Enterprise Development Co Ltd Face pose adjustment method, system, device, equipment and medium
CN113840757A (en) * 2021-04-30 2021-12-24 Huawei Technologies Co Ltd Display screen adjustment method and device
CN113126944A (en) * 2021-05-17 2021-07-16 Beijing Dilusense Technology Co Ltd Depth map display method, display device, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
CN107807806A (en) Display parameter adjustment method, device and electronic device
CN107797664A (en) Content display method, device and electronic device
CN107610077A (en) Image processing method and device, electronic device and computer-readable storage medium
CN107742296A (en) Dynamic image generation method and electronic device
CN107734267A (en) Image processing method and device
CN107509045A (en) Image processing method and device, electronic device and computer-readable storage medium
CN107707831A (en) Image processing method and device, electronic device and computer-readable storage medium
CN107707835A (en) Image processing method and device, electronic device and computer-readable storage medium
CN107995434A (en) Image acquisition method, electronic device and computer-readable storage medium
CN107707838A (en) Image processing method and device
CN107610078A (en) Image processing method and device
CN107734264A (en) Image processing method and device
CN107509043A (en) Image processing method and device
CN107644440A (en) Image processing method and device, electronic device and computer-readable storage medium
CN107705278A (en) Method for adding dynamic effects and terminal device
CN107742300A (en) Image processing method, device, electronic device and computer-readable storage medium
CN107610127A (en) Image processing method, device, electronic device and computer-readable storage medium
CN107705277A (en) Image processing method and device
CN107610076A (en) Image processing method and device, electronic device and computer-readable storage medium
CN107454336A (en) Image processing method and device, electronic device and computer-readable storage medium
CN107613223A (en) Image processing method and device, electronic device and computer-readable storage medium
CN107527335A (en) Image processing method and device, electronic device and computer-readable storage medium
CN107592491A (en) Video communication background display method and device
CN107705243A (en) Image processing method and device, electronic device and computer-readable storage medium
CN107613383A (en) Video volume adjustment method, device and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180316