CN107886053A - Eyeglasses-wearing state detection method and device, and electronic device - Google Patents
Eyeglasses-wearing state detection method and device, and electronic device
- Publication number
- CN107886053A CN107886053A CN201711020161.7A CN201711020161A CN107886053A CN 107886053 A CN107886053 A CN 107886053A CN 201711020161 A CN201711020161 A CN 201711020161A CN 107886053 A CN107886053 A CN 107886053A
- Authority
- CN
- China
- Prior art keywords
- active user
- face
- models
- usage time
- structure light
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/164—Detection; Localisation; Normalisation using holistic features
Abstract
The invention discloses an eyeglasses-wearing state detection method, a device and an electronic device. The method includes: projecting onto the current user of a terminal device to obtain a 3D face model of the current user; analyzing the 3D face model to extract first feature point information of the eye region in the model; comparing the first feature point information with pre-stored second feature point information of the eye region in the glasses-wearing state to obtain the similarity between the eye region in the 3D face model and the eye region in the glasses-wearing state; and determining from the similarity whether the current user is wearing glasses. By collecting the 3D face model of the current user, extracting feature information of the eye region in the model, and thereby determining whether the current user wears glasses, the method improves the efficiency and accuracy of eyeglasses-wearing state detection.
Description
Technical field
The present invention relates to the technical field of image processing, and more particularly to an eyeglasses-wearing state detection method, a device and an electronic device.
Background technology
In existing eyeglasses-wearing state detection methods, a corresponding detection means is installed on the glasses themselves, for example to measure the distance between the glasses and the user's eyes and thereby determine whether the glasses are worn on the user's eyes. Such methods require special glasses and are therefore costly, and they can only detect whether that particular pair of glasses is worn on the user's eyes; when the user wears ordinary glasses, it cannot be determined whether the user is wearing glasses at all. This reduces the efficiency and accuracy of eyeglasses-wearing state detection.
Summary of the invention
Embodiments of the present invention provide an eyeglasses-wearing state detection method, a device and an electronic device.
The eyeglasses-wearing state detection method of embodiments of the present invention includes:
projecting onto the current user of a terminal device to obtain a 3D face model of the current user;
analyzing the 3D face model to extract first feature point information of the eye region in the 3D face model;
comparing the first feature point information with pre-stored second feature point information of the eye region in the glasses-wearing state to obtain the similarity between the eye region in the 3D face model and the eye region in the glasses-wearing state; and
determining, according to the similarity, whether the current user is wearing glasses.
Further, projecting onto the current user of the terminal device to obtain the 3D face model of the current user includes:
projecting structured light onto the current user;
capturing a structured light image modulated by the current user;
demodulating the phase information corresponding to each pixel of the structured light image to obtain a face depth image of the current user; and
generating the 3D face model of the current user from the face depth image and the structured light image.
Further, demodulating the phase information corresponding to each pixel of the structured light image to obtain the face depth image of the current user includes:
demodulating the phase information corresponding to each pixel of the structured light image;
converting the phase information into depth information; and
generating the face depth image from the depth information.
Further, before projecting onto the current user of the terminal device to obtain the 3D face model of the current user, the method also includes:
projecting onto the terminal device user to obtain a 3D face model of the terminal device user in the glasses-wearing state;
analyzing the 3D face model in the glasses-wearing state to extract the second feature point information of the eye region in that model; and
storing the second feature point information.
Further, the method also includes:
when it is determined that the current user is wearing glasses, monitoring the current user in a first eye-protection mode: obtaining the usage time for which the current user uses the terminal device, and locking the terminal device when the usage time exceeds a first usage time threshold, the first eye-protection mode including the first usage time threshold;
when it is determined that the current user is not wearing glasses, monitoring the current user in a second eye-protection mode: obtaining the usage time for which the current user uses the terminal device, and locking the terminal device when the usage time exceeds a second usage time threshold, the second eye-protection mode including the second usage time threshold;
wherein the first usage time threshold is less than the second usage time threshold.
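The two eye-protection modes above can be sketched as follows; the concrete threshold values and the function name are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch of the two eye-protection modes; the threshold values
# (in minutes) are assumed here purely for demonstration.

FIRST_USAGE_THRESHOLD_MIN = 30   # stricter limit: user wears glasses
SECOND_USAGE_THRESHOLD_MIN = 60  # looser limit: user does not wear glasses

def should_lock_terminal(wears_glasses: bool, usage_minutes: float) -> bool:
    """Return True when the terminal device should be locked, i.e. when the
    usage time exceeds the threshold of the active eye-protection mode."""
    threshold = (FIRST_USAGE_THRESHOLD_MIN if wears_glasses
                 else SECOND_USAGE_THRESHOLD_MIN)
    return usage_minutes > threshold
```

Note that, consistent with the claim, the first threshold is smaller than the second, so a glasses-wearing user is locked out earlier.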
Further, before projecting onto the current user of the terminal device to obtain the 3D face model of the current user, the method also includes:
judging whether a power key wake-up signal or a lift wake-up signal has been received;
and projecting onto the current user of the terminal device to obtain the 3D face model of the current user includes:
if the power key wake-up signal or the lift wake-up signal is received, projecting onto the current user of the terminal device to obtain the 3D face model of the current user.
In the eyeglasses-wearing state detection method of embodiments of the present invention, the current user of a terminal device is projected onto and a 3D face model of the current user is obtained; the 3D face model is analyzed and first feature point information of the eye region is extracted; the first feature point information is compared with pre-stored second feature point information of the eye region in the glasses-wearing state to obtain the similarity between the eye region in the 3D face model and the eye region in the glasses-wearing state; and whether the current user wears glasses is determined from the similarity. By collecting the 3D face model of the current user, extracting feature information of the eye region in the model, and thereby determining whether the current user wears glasses, the method improves the efficiency and accuracy of eyeglasses-wearing state detection.
The eyeglasses-wearing state detection device of embodiments of the present invention includes:
a depth image acquisition component, configured to project onto the current user of a terminal device to obtain a 3D face model of the current user; and
a processor, configured to analyze the 3D face model to extract first feature point information of the eye region in the 3D face model; compare the first feature point information with pre-stored second feature point information of the eye region in the glasses-wearing state to obtain the similarity between the eye region in the 3D face model and the eye region in the glasses-wearing state; and determine, according to the similarity, whether the current user is wearing glasses.
Further, the depth image acquisition component includes a structured light projector and a structured light camera. The structured light projector is configured to project structured light onto the current user. The structured light camera is configured to capture a structured light image modulated by the current user; demodulate the phase information corresponding to each pixel of the structured light image to obtain a face depth image of the current user; and generate the 3D face model of the current user from the face depth image and the structured light image.
Further, the structured light camera is also configured to demodulate the phase information corresponding to each pixel of the structured light image; convert the phase information into depth information; and generate the face depth image from the depth information.
Further, the depth image acquisition component is also configured to project onto the terminal device user to obtain a 3D face model of the terminal device user in the glasses-wearing state. The processor is also configured to analyze the 3D face model in the glasses-wearing state, extract the second feature point information of the eye region in that model, and store the second feature point information.
Further, the processor is also configured to: when it is determined that the current user is wearing glasses, monitor the current user in a first eye-protection mode, obtaining the usage time for which the current user uses the terminal device and locking the terminal device when the usage time exceeds a first usage time threshold, the first eye-protection mode including the first usage time threshold; and when it is determined that the current user is not wearing glasses, monitor the current user in a second eye-protection mode, obtaining the usage time for which the current user uses the terminal device and locking the terminal device when the usage time exceeds a second usage time threshold, the second eye-protection mode including the second usage time threshold; wherein the first usage time threshold is less than the second usage time threshold.
Further, the processor is also configured to judge whether a power key wake-up signal or a lift wake-up signal has been received, and the depth image acquisition component is configured to project onto the current user of the terminal device to obtain the 3D face model of the current user when the power key wake-up signal or the lift wake-up signal is received.
In the eyeglasses-wearing state detection device of embodiments of the present invention, the current user of a terminal device is projected onto and a 3D face model of the current user is obtained; the 3D face model is analyzed and first feature point information of the eye region is extracted; the first feature point information is compared with pre-stored second feature point information of the eye region in the glasses-wearing state to obtain the similarity between the eye region in the 3D face model and the eye region in the glasses-wearing state; and whether the current user wears glasses is determined from the similarity. By collecting the 3D face model of the current user, extracting feature information of the eye region in the model, and thereby determining whether the current user wears glasses, the device improves the efficiency and accuracy of eyeglasses-wearing state detection.
The electronic device of embodiments of the present invention includes one or more processors, a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including instructions for performing the above eyeglasses-wearing state detection method.
The computer-readable storage medium of embodiments of the present invention includes a computer program used in combination with an electronic device capable of imaging, the computer program being executable by a processor to perform the above eyeglasses-wearing state detection method.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and in part will become apparent from the description or be learned by practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow diagram of an eyeglasses-wearing state detection method according to some embodiments of the present invention.
Fig. 2 is a module diagram of an eyeglasses-wearing state detection device according to some embodiments of the present invention.
Fig. 3 is a structural diagram of an electronic device according to some embodiments of the present invention.
Fig. 4 is a flow diagram of an eyeglasses-wearing state detection method according to some embodiments of the present invention.
Fig. 5 is a flow diagram of an eyeglasses-wearing state detection method according to some embodiments of the present invention.
Fig. 6(a) to Fig. 6(e) are scene diagrams of structured light measurement according to an embodiment of the present invention.
Fig. 7(a) and Fig. 7(b) are scene diagrams of structured light measurement according to an embodiment of the present invention.
Fig. 8 is a flow diagram of an eyeglasses-wearing state detection method according to some embodiments of the present invention.
Fig. 9 is a module diagram of an electronic device according to some embodiments of the present invention.
Fig. 10 is a module diagram of an electronic device according to some embodiments of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, examples of which are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended to explain the present invention, and are not to be construed as limiting the present invention.
Referring to Figs. 1 and 2, the eyeglasses-wearing state detection method of embodiments of the present invention is used in an electronic device 1000. The eyeglasses-wearing state detection method includes:
S101, projecting onto the current user of a terminal device to obtain a 3D face model of the current user.
S102, analyzing the 3D face model to extract first feature point information of the eye region in the 3D face model.
S103, comparing the first feature point information with pre-stored second feature point information of the eye region in the glasses-wearing state to obtain the similarity between the eye region in the 3D face model and the eye region in the glasses-wearing state.
S104, determining, according to the similarity, whether the current user is wearing glasses.
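A minimal sketch of steps S102–S104, assuming the eye-region feature points are represented as an N×3 array of 3D landmark coordinates and using a rescaled cosine similarity with a fixed threshold as a stand-in for the patent's unspecified comparison metric:

```python
import numpy as np

def eye_region_similarity(first_pts: np.ndarray, second_pts: np.ndarray) -> float:
    """Similarity in [0, 1] between two equally-sized eye-region landmark
    arrays, computed as a rescaled cosine similarity of the flattened points."""
    a, b = first_pts.ravel(), second_pts.ravel()
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return 0.5 * (cos + 1.0)  # map [-1, 1] onto [0, 1]

def wears_glasses(first_pts: np.ndarray, second_pts: np.ndarray,
                  threshold: float = 0.9) -> bool:
    """S104: decide from the similarity whether the current user wears glasses.
    The 0.9 threshold is an assumed value for illustration."""
    return eye_region_similarity(first_pts, second_pts) >= threshold
```

In practice the comparison would operate on feature descriptors extracted from the 3D face model rather than raw coordinates, but the thresholded-similarity decision structure is the same.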
Referring to Fig. 3, the eyeglasses-wearing state detection method of embodiments of the present invention can be implemented by the eyeglasses-wearing state detection device 100 of embodiments of the present invention. The eyeglasses-wearing state detection device 100 is used in an electronic device 1000 and includes a depth image acquisition component 12 and a processor 20. In this embodiment, steps S102, S103 and S104 can be implemented by the processor 20, and step S101 can be implemented by the depth image acquisition component 12. In addition, the eyeglasses-wearing state detection device 100 may also include a visible light camera 11 for capturing a picture of the scene in which the current user is located, so that the specific orientation of the current user can be recognized from the picture.
That is, the depth image acquisition component 12 is configured to project onto the current user of the terminal device to obtain the 3D face model of the current user; the processor 20 is configured to analyze the 3D face model, extract the first feature point information of the eye region in the model, compare it with the pre-stored second feature point information of the eye region in the glasses-wearing state to obtain the similarity between the two eye regions, and determine from the similarity whether the current user is wearing glasses.
In this embodiment, before the depth image acquisition component 12 projects onto the current user of the terminal device to obtain the 3D face model of the current user, the processor 20 may also be configured to judge whether a power key wake-up signal or a lift wake-up signal has been received. Correspondingly, the depth image acquisition component 12 is configured to project onto the current user of the terminal device to obtain the 3D face model of the current user when the power key wake-up signal or the lift wake-up signal is received.
In this embodiment, when the current user wants to use the terminal device, the current user typically presses the home key or lifts the terminal device so that the screen lights up, after which the current user can perform various operations, such as checking the time, unlocking the terminal device, or using an application. Therefore, when the processor 20 receives the power key wake-up signal or the lift wake-up signal, this indicates that the current user has begun to use the terminal device, so the processor 20 can control the depth image acquisition component 12 to project onto the current user of the terminal device and obtain the 3D face model of the current user, determine from the 3D face model whether the current user is wearing glasses, and then take corresponding eye-protection measures for the current user depending on whether the current user is wearing glasses.
, it is necessary to explanation in the present embodiment, before step 101, described eyeglasses-wearing condition detection method can be with
Including:Projected to terminal user, obtain the face 3D models under terminal user's wearing spectacles state;To wearing
Face 3D models under glasses state are analyzed, and extract second of ocular in the face 3D models under wearing spectacles state
Characteristic point information;Store second feature point information.
That is, before step S101, the processor 20 may also be configured to project onto the terminal device user, obtain the 3D face model of the terminal device user in the glasses-wearing state, analyze the 3D face model in the glasses-wearing state, extract the second feature point information of the eye region in that model, and store the second feature point information.
In this embodiment, using the terminal device while wearing glasses is more likely, compared with using it without glasses, to cause eye fatigue and vision loss for the user. It is therefore necessary to take corresponding eye-protection measures for the current user according to whether the current user of the terminal device is wearing glasses. In the course of judging whether the current user is wearing glasses, the first feature point information of the eye region in the current user's 3D face model has to be compared with the pre-stored second feature point information of the eye region in the glasses-wearing state; therefore, before step S101, the processor 20 may also be configured to obtain and store the second feature point information of the eye region in the 3D face model of the terminal device user in the glasses-wearing state.
In addition, it should be noted that, in order to simplify the detection process and improve detection efficiency, the second feature point information of the eye region in the 3D face models of multiple users in the glasses-wearing state may be pre-stored in the terminal device, for example for multiple users of different age brackets, different sexes and different face shapes. The terminal device user can then select, according to his or her own characteristics, the second feature point information of the eye region in the 3D face model of one of those users as the feature information to be compared, or the processor 20 can select it according to the characteristics of the terminal device user.
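As an illustration of such profile-based selection, a bank of stored templates might be keyed by user profile as below; the profile keys, template names and fallback rule are all invented here and not specified by the patent:

```python
# Hypothetical lookup of a pre-stored second-feature-point template by the
# user's age bracket, sex and face shape; all keys and values are invented.

STORED_TEMPLATES = {
    ("adult", "female", "oval"):  "eye_template_A",
    ("adult", "male",   "round"): "eye_template_B",
    ("child", "male",   "oval"):  "eye_template_C",
}

def select_template(age_bracket: str, sex: str, face_shape: str) -> str:
    """Return the stored eye-region template matching the user's profile,
    falling back to the first stored template when no exact match exists."""
    key = (age_bracket, sex, face_shape)
    return STORED_TEMPLATES.get(key, next(iter(STORED_TEMPLATES.values())))
```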
The eyeglasses-wearing state detection device 100 of embodiments of the present invention can be applied to the electronic device 1000 of embodiments of the present invention. In other words, the electronic device 1000 of embodiments of the present invention includes the eyeglasses-wearing state detection device 100 of embodiments of the present invention.
In some embodiments, the electronic device 1000 includes a mobile phone, a tablet computer, a notebook computer, a smart bracelet, a smart watch, a smart helmet, smart glasses, and the like.
In the eyeglasses-wearing state detection method of embodiments of the present invention, the current user of a terminal device is projected onto and a 3D face model of the current user is obtained; the 3D face model is analyzed and first feature point information of the eye region is extracted; the first feature point information is compared with pre-stored second feature point information of the eye region in the glasses-wearing state to obtain the similarity between the eye region in the 3D face model and the eye region in the glasses-wearing state; and whether the current user wears glasses is determined from the similarity. By collecting the 3D face model of the current user, extracting feature information of the eye region in the model, and thereby determining whether the current user wears glasses, the method improves the efficiency and accuracy of eyeglasses-wearing state detection.
Referring to Fig. 4, in some embodiments, step S101 may specifically include the following steps:
S1011, projecting structured light onto the current user.
S1012, capturing a structured light image modulated by the current user.
S1013, demodulating the phase information corresponding to each pixel of the structured light image to obtain a face depth image of the current user.
S1014, generating the 3D face model of the current user from the face depth image and the structured light image.
Referring again to Fig. 3, in some embodiments, the depth image acquisition component 12 includes a structured light projector 121 and a structured light camera 122. Step S1011 can be implemented by the structured light projector 121, and steps S1012, S1013 and S1014 can be implemented by the structured light camera 122.
In other words, the structured light projector 121 can be used to project structured light onto the current user, and the structured light camera 122 can be used to capture the structured light image modulated by the current user, demodulate the phase information corresponding to each pixel of the structured light image to obtain the face depth image of the current user, and generate the 3D face model of the current user from the face depth image and the structured light image.
Specifically, after the structured light projector 121 projects structured light of a certain pattern onto the face or body of the current user, a structured light image modulated by the current user is formed on the surface of the current user's face or body. The structured light camera 122 captures this modulated structured light image and demodulates it to obtain a depth image, from which the face depth image is obtained; combining the face depth image with the structured light image of the face region, the 3D face model of the current user is generated. The pattern of the structured light may be laser stripes, Gray codes, sinusoidal fringes, non-uniform speckle, or the like.
Referring to Fig. 5, in some embodiments, the process in step S1013 of demodulating the phase information corresponding to each pixel of the structured light image to obtain the face depth image of the current user may specifically include the following steps:
S10131, demodulating the phase information corresponding to each pixel of the structured light image.
S10132, converting the phase information into depth information.
S10133, generating the face depth image from the depth information.
Referring again to Fig. 2, in some embodiments, steps S10131, S10132 and S10133 can be implemented by the structured light camera 122.
In other words, the structured light camera 122 can further be used to demodulate the phase information corresponding to each pixel of the structured light image, convert the phase information into depth information, and generate the face depth image from the depth information.
Specifically, compared with the unmodulated structured light, the phase information of the modulated structured light is changed, so the structured light shown in the structured light image is distorted, and the changed phase information can characterize the depth information of the object. Therefore, the structured light camera 122 first demodulates the phase information corresponding to each pixel in the structured light image, then calculates depth information from the phase information, and thereby obtains the final face depth image.
To make the process of collecting the depth image of the face or body of the current user by means of structured light clearer to those skilled in the art, its concrete principle is illustrated below using the widely applied grating projection technique (fringe projection technique) as an example. The grating projection technique belongs to surface structured light in the broad sense.
As shown in Fig. 6(a), when projecting with surface structured light, sinusoidal fringes are first generated by computer programming and projected onto the measured object by the structured light projector 121; the structured light camera 122 then captures the degree of bending of the fringes after modulation by the object, and the bent fringes are demodulated to obtain the phase, which is then converted into depth information to obtain the depth image. To avoid errors or error coupling, the depth image acquisition component 12 has to be calibrated before depth information is collected with structured light; the calibration includes calibration of geometric parameters (for example, the relative position between the structured light camera 122 and the structured light projector 121), of the internal parameters of the structured light camera 122, of the internal parameters of the structured light projector 121, and so on.
Specifically, in the first step, sinusoidal fringes are generated by computer programming. Since the phase subsequently has to be obtained from the distorted fringes, for example using the four-step phase-shifting method, four fringe patterns with phase differences of π/2 are generated here; the structured light projector 121 projects the four patterns onto the measured object (the mask shown in Fig. 6(a)) in a time-shared manner, and the structured light camera 122 collects images such as the one on the left of Fig. 6(b), while the fringes on the reference plane, shown on the right of Fig. 6(b), are read.
In the second step, phase recovery is carried out. The structured light camera 122 calculates the modulated phase map from the four collected modulated fringe patterns (i.e. the structured light images); the result at this point is a wrapped phase map. Since the result of the four-step phase-shifting algorithm is calculated with an arctangent function, the phase of the modulated structured light is limited to [-π, π]; that is, whenever the modulated phase exceeds [-π, π], it wraps around again. The resulting principal phase value is shown in Fig. 6(c).
During phase recovery, jump-removal (phase unwrapping) has to be performed to restore the wrapped phase to a continuous phase. As shown in Fig. 6(d), the left side is the modulated continuous phase map and the right side is the reference continuous phase map.
In the third step, the phase difference (i.e. the phase information) is obtained by subtracting the reference continuous phase from the modulated continuous phase; this phase difference characterizes the depth information of the measured object relative to the reference plane. The phase difference is then substituted into the phase-to-depth conversion formula (in which the parameters involved are obtained by calibration), yielding the 3D model of the measured object shown in Fig. 6(e).
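The three steps above can be condensed into a short numerical sketch: the wrapped phase comes from the standard four-step arctangent formula, and the final phase-to-depth mapping is simplified to a single calibration constant k (an assumption; the real conversion formula has several calibrated parameters):

```python
import numpy as np

def four_step_wrapped_phase(i1, i2, i3, i4):
    """Wrapped phase in [-pi, pi] from four fringe images shifted by pi/2:
    I_n = A + B*cos(phi + n*pi/2), so tan(phi) = (I4 - I2) / (I1 - I3)."""
    return np.arctan2(i4 - i2, i1 - i3)

def depth_from_phase(phase_mod, phase_ref, k=1.0):
    """Unwrap both phase maps along each row, subtract the reference phase,
    and scale the phase difference by a calibration constant k (toy model)."""
    return k * (np.unwrap(phase_mod) - np.unwrap(phase_ref))
```

The arctangent is what limits the recovered phase to [-π, π] as described above, and `np.unwrap` performs the jump-removal that restores a continuous phase.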
It should be appreciated that, in practical applications, depending on the concrete application scene, the structured light employed in embodiments of the present invention may be any other pattern besides the above grating.
As a possible implementation, the present invention may also use speckle structured light to collect the depth information of the current user.
Specifically, the method of obtaining depth information with speckle structured light uses a diffractive element that is essentially a flat plate. The diffractive element has a relief diffraction structure with a particular phase distribution, and its cross section is a stepped relief structure with two or more levels. The substrate of the diffractive element is approximately 1 micron thick, and the heights of the steps are non-uniform, ranging from about 0.7 micron to 0.9 micron. The structure shown in Fig. 7(a) is a partial diffraction structure of the collimating beam-splitting element of this embodiment. Fig. 7(b) is a cross-sectional side view along section A-A; the units of the abscissa and ordinate are microns. The speckle pattern generated by the speckle structured light is highly random, and the pattern changes with distance. Therefore, before depth information can be obtained with speckle structured light, the speckle patterns in space must first be calibrated. For example, within a range of 0 to 4 meters from the structured light camera 122, a reference plane is taken every 1 centimetre, so that 400 speckle images are saved after calibration; the smaller the calibration spacing, the higher the precision of the obtained depth information. Then, the structured light projector 121 projects the speckle structured light onto the measured object (i.e., the current user), and the height differences of the object's surface modulate the speckle pattern projected onto it. After the structured light camera 122 captures the speckle pattern projected onto the measured object (i.e., the structure light image), the speckle pattern is cross-correlated one by one with the 400 speckle images saved during calibration, yielding 400 correlation images. The position of the measured object in space appears as a peak in the correlation images; superimposing these peaks and performing an interpolation operation yields the depth information of the measured object.
A common diffractive element produces multiple diffracted beams when diffracting a light beam, but the intensities of those beams differ greatly, so the risk of injury to the human eye is also large. Even if the diffracted light is diffracted again, the uniformity of the resulting beams remains low, so projecting onto the measured object with beams diffracted by a common diffractive element gives a poor result. In this embodiment, a collimating beam-splitting element is used. This element not only collimates non-collimated light but also splits the beam: the non-collimated light reflected by the mirror exits the collimating beam-splitting element as multiple collimated beams at different angles. The cross-sectional areas of the emitted collimated beams are approximately equal, and their energy fluxes are approximately equal, so the projection effect of the speckle light obtained by diffracting these beams is better. Meanwhile, because the laser light is dispersed into every beam, the risk of injury to the human eye is further reduced; and compared with other uniformly arranged structured light, the speckle structured light consumes less power while achieving the same collection effect.
Referring to Fig. 8, in some embodiments, after step 104 the method may further include the following steps:
S105: when it is determined that the current user wears glasses, monitor the current user using a first eye-protection mode: obtain the time for which the current user has been using the terminal device, and lock the terminal device when the usage time exceeds a first usage time threshold. The first eye-protection mode includes the first usage time threshold.
S106: when it is determined that the current user does not wear glasses, monitor the current user using a second eye-protection mode: obtain the time for which the current user has been using the terminal device, and lock the terminal device when the usage time exceeds a second usage time threshold. The second eye-protection mode includes the second usage time threshold. The first usage time threshold is less than the second usage time threshold.
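A minimal sketch of the two modes (the patent fixes only the relation between the thresholds, first less than second; the concrete minute values below are illustrative assumptions):

```python
GLASSES_THRESHOLD_MIN = 40     # first usage time threshold (first eye-protection mode)
NO_GLASSES_THRESHOLD_MIN = 60  # second usage time threshold (second eye-protection mode)

def should_lock(wears_glasses: bool, usage_minutes: float) -> bool:
    # A glasses-wearing user gets the stricter (smaller) threshold.
    threshold = GLASSES_THRESHOLD_MIN if wears_glasses else NO_GLASSES_THRESHOLD_MIN
    return usage_minutes > threshold
```

Locking the terminal device when `should_lock` returns true forces a rest, with the stricter limit applied to glasses wearers.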
Referring again to Fig. 2, in some embodiments steps 105 and 106 may be implemented by the processor 20.
In other words, the processor 20 may further be configured to: when it is determined that the current user wears glasses, monitor the current user using the first eye-protection mode, obtain the time for which the current user has been using the terminal device, and lock the terminal device when the usage time exceeds the first usage time threshold, the first eye-protection mode including the first usage time threshold; and, when it is determined that the current user does not wear glasses, monitor the current user using the second eye-protection mode, obtain the time for which the current user has been using the terminal device, and lock the terminal device when the usage time exceeds the second usage time threshold, the second eye-protection mode including the second usage time threshold, where the first usage time threshold is less than the second usage time threshold.
Specifically, when it is determined that the current user wears glasses, the processor 20 may obtain the first usage time threshold of the first eye-protection mode and calculate the time for which the current user has been using the terminal device. When the usage time exceeds the first usage time threshold, the eyes of the glasses-wearing current user are determined to be in a fatigued state, so the terminal device is locked to prevent the current user from continuing to use it, letting the user's eyes rest and reducing fatigue. When it is determined that the current user does not wear glasses, the processor obtains the second usage time threshold of the second eye-protection mode and calculates the time for which the current user has been using the terminal device; when the usage time exceeds the second usage time threshold, the eyes of the non-glasses-wearing current user are determined to be in a fatigued state, so the terminal device is locked to prevent the current user from continuing to use it.
It should be noted that the processor 20 may also set a lock duration for the terminal device. If, when the current user wakes the terminal device again, the actual locked time has not yet reached the lock duration, the screen of the terminal device is not lit; if, when the current user wakes the terminal device again, the actual locked time has reached the lock duration, the screen of the terminal device may be lit.
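The wake-up behaviour can be sketched as a small state holder (an illustrative sketch; the 300-second lock duration is an assumed value, since the patent leaves the concrete duration to the processor 20):

```python
class LockController:
    """Tracks when the terminal was locked and whether it may light the screen."""

    def __init__(self, lock_duration_s: float = 300.0):
        self.lock_duration_s = lock_duration_s
        self.locked_at = None

    def lock(self, now_s: float) -> None:
        self.locked_at = now_s

    def may_light_screen(self, now_s: float) -> bool:
        # On wake, light the screen only if the full lock duration has elapsed.
        if self.locked_at is None:
            return True
        return (now_s - self.locked_at) >= self.lock_duration_s
```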
Referring to Fig. 3 and Fig. 9 together, an embodiment of the present invention further proposes an electronic device 1000. The electronic device 1000 includes an eyeglasses-wearing condition detection device 100, which may be implemented in hardware and/or software. The eyeglasses-wearing condition detection device 100 includes an imaging device 10 and a processor 20.
The imaging device 10 includes a visible light camera 11 and a depth image acquisition component 12.
Specifically, the visible light camera 11 includes an image sensor 111 and one or more lenses 112, where the image sensor 111 includes a color filter array (such as a Bayer filter array). Each imaging pixel of the image sensor 111 senses the light intensity and wavelength information of the photographed scene and generates a set of raw image data. The image sensor 111 sends this raw image data to the processor 20, which performs denoising, interpolation and other operations on it to obtain a color image. The processor 20 may process each image pixel of the raw image data one by one in various formats; for example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits, and the processor 20 may process the image pixels at the same or different bit depths.
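For instance, raw pixels of different bit depths can be brought to a common 8-bit range by a simple shift (an illustrative sketch; the patent does not specify how the processor 20 handles the different bit depths beyond denoising and interpolation):

```python
def scale_to_8bit(pixel: int, bit_depth: int) -> int:
    # Drop the extra precision of a 10/12/14-bit raw pixel to get an 8-bit value.
    if bit_depth < 8:
        raise ValueError("bit depth below 8 is not supported")
    return pixel >> (bit_depth - 8)
```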
The depth image acquisition component 12 includes a structured light projector 121 and a structured light camera 122, and may be used to capture the depth information of the current user to obtain a face depth image. The structured light projector 121 projects structured light onto the current user, where the structured light pattern may be laser stripes, a Gray code, sinusoidal fringes, a randomly arranged speckle pattern, or the like. The structured light camera 122 includes an image sensor 1221 and one or more lenses 1222. The image sensor 1221 captures the structure light image projected onto the current user by the structured light projector 121. The structure light image may be sent by the depth image acquisition component 12 to the processor 20 for demodulation, phase recovery, phase information calculation and other processing to obtain the depth information of the current user, and thereby the face depth image and face 3D model of the current user.
In some embodiments, the functions of the visible light camera 11 and the structured light camera 122 may be realized by a single camera; in other words, the imaging device 10 includes only one camera and one structured light projector 121, and that camera can capture both the original image and the structure light image.
Besides structured light, the face depth image of the current user may also be obtained by binocular vision or by depth acquisition methods based on time of flight (TOF).
The processor 20 is further configured to analyze the face 3D model and extract the first feature point information of the eye region in the face 3D model; compare the first feature point information with the pre-stored second feature point information of the eye region in the glasses-wearing state to obtain the similarity between the eye region in the face 3D model and the eye region in the glasses-wearing state; and determine whether the current user wears glasses according to the similarity.
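The comparison can be sketched as follows (the patent does not fix a similarity measure or decision threshold; cosine similarity over flattened 3D eye-region feature points and the 0.9 threshold are assumptions for illustration):

```python
import numpy as np

def eye_region_similarity(first_pts: np.ndarray, second_pts: np.ndarray) -> float:
    # Cosine similarity between flattened 3D eye-region feature points.
    a, b = first_pts.ravel(), second_pts.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def wears_glasses(first_pts, second_pts, threshold: float = 0.9) -> bool:
    # High similarity to the stored glasses-wearing eye region means glasses.
    return eye_region_similarity(first_pts, second_pts) >= threshold
```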
In addition, the eyeglasses-wearing condition detection device 100 further includes an image memory 30. The image memory 30 may be embedded in the electronic device 1000 or be a memory independent of the electronic device 1000, and may include a direct memory access (DMA) feature. The image data collected by the visible light camera 11 or the structure-light-image-related data collected by the depth image acquisition component 12 may be transferred to the image memory 30 for storage or caching. The processor 20 may read the raw image data from the image memory 30, and may also read the structure-light-image-related data from the image memory 30 for processing to obtain the depth image. In addition, the image data and the depth image may also be stored in the image memory 30 for the processor 20 to call for processing at any time.
The eyeglasses-wearing condition detection device 100 may further include a display 50. The display 50 may directly display the face 3D model of the current user for the user to view, or the model may be further processed by a graphics engine or a graphics processing unit (GPU). The eyeglasses-wearing condition detection device 100 further includes an encoder/decoder 60, which can encode and decode image data such as the depth image; the encoded image data can be stored in the image memory 30 and decompressed by the decoder before the image is shown on the display 50. The encoder/decoder 60 may be implemented by a central processing unit (CPU), a GPU or a coprocessor; in other words, the encoder/decoder 60 may be any one or more of a CPU, a GPU and a coprocessor.
The eyeglasses-wearing condition detection device 100 further includes a control logic 40. When the imaging device 10 is imaging, the processor 20 analyzes the data obtained by the imaging device to determine image statistics for one or more control parameters of the imaging device 10 (for example, exposure time). The processor 20 sends the image statistics to the control logic 40, and the control logic 40 controls the imaging device 10 to image with the determined control parameters. The control logic 40 may include a processor and/or a microcontroller that executes one or more routines (such as firmware), and the one or more routines may determine the control parameters of the imaging device 10 according to the received image statistics.
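For example, a control parameter such as exposure time could be updated from the image statistics with a simple proportional rule (an illustrative assumption; the patent only states that the routines determine control parameters from the received statistics):

```python
def next_exposure(current_exposure_us: float, mean_brightness: float,
                  target: float = 128.0) -> float:
    # Proportional auto-exposure: scale the exposure toward the target
    # mean brightness reported by the image statistics.
    if mean_brightness <= 0:
        return current_exposure_us
    return current_exposure_us * (target / mean_brightness)
```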
Referring to Fig. 10, the electronic device 1000 of an embodiment of the present invention includes one or more processors 200, a memory 300 and one or more programs 310. The one or more programs 310 are stored in the memory 300 and configured to be executed by the one or more processors 200. The programs 310 include instructions for performing the eyeglasses-wearing condition detection method of any one of the above embodiments.
For example, the program 310 includes instructions for performing the eyeglasses-wearing condition detection method of the following steps:
projecting toward the current user of the terminal device to obtain a face 3D model of the current user;
analyzing the face 3D model to extract first feature point information of the eye region in the face 3D model;
comparing the first feature point information with pre-stored second feature point information of the eye region in a glasses-wearing state, to obtain the similarity between the eye region in the face 3D model and the eye region in the glasses-wearing state; and
determining whether the current user wears glasses according to the similarity.
For another example, the program 310 further includes instructions for performing the eyeglasses-wearing condition detection method of the following steps:
projecting structured light to the current user;
capturing a structure light image modulated by the current user;
demodulating the phase information corresponding to each pixel of the structure light image to obtain a face depth image of the current user; and
generating the face 3D model of the current user according to the face depth image of the current user and the structure light image.
The computer-readable storage medium of an embodiment of the present invention includes a computer program used in combination with the electronic device 1000 capable of imaging. The computer program can be executed by the processor 200 to complete the eyeglasses-wearing condition detection method of any one of the above embodiments.
For example, the computer program can be executed by the processor 200 to complete the eyeglasses-wearing condition detection method of the following steps:
demodulating the phase information corresponding to each pixel of the structure light image;
converting the phase information into depth information; and
generating the face depth image according to the depth information.
For another example, the computer program can also be executed by the processor 200 to complete the eyeglasses-wearing condition detection method of the following steps:
when it is determined that the current user wears glasses, monitoring the current user using a first eye-protection mode, obtaining the time for which the current user has been using the terminal device, and locking the terminal device when the usage time exceeds a first usage time threshold, the first eye-protection mode including the first usage time threshold;
when it is determined that the current user does not wear glasses, monitoring the current user using a second eye-protection mode, obtaining the time for which the current user has been using the terminal device, and locking the terminal device when the usage time exceeds a second usage time threshold, the second eye-protection mode including the second usage time threshold;
the first usage time threshold being less than the second usage time threshold.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic references to the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in an appropriate manner in any one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine different embodiments or examples described in this specification, and the features of different embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only, and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "multiple" means at least two, for example two or three, unless otherwise specifically defined.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, fragment or portion of code including one or more executable instructions for implementing the steps of a specific logical function or process. The scope of the preferred embodiments of the present invention includes other implementations, in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order, depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate or transport a program for use by, or in combination with, an instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise suitably processing it if necessary, and then stored in a computer memory.
It should be understood that each part of the present invention may be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any of the following techniques known in the art, or a combination thereof: a discrete logic circuit with logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those of ordinary skill in the art can understand that all or part of the steps carried in the methods of the above embodiments can be completed by instructing the relevant hardware through a program, and the program may be stored in a computer-readable storage medium; when executed, the program performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The above integrated module may be implemented in the form of hardware or in the form of a software function module. If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc or the like. Although the embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and cannot be construed as limiting the present invention; those of ordinary skill in the art may change, modify, replace and vary the above embodiments within the scope of the present invention.
Claims (14)
- 1. An eyeglasses-wearing condition detection method, characterized by comprising: projecting toward a current user of a terminal device to obtain a face 3D model of the current user; analyzing the face 3D model to extract first feature point information of an eye region in the face 3D model; comparing the first feature point information with pre-stored second feature point information of the eye region in a glasses-wearing state, to obtain a similarity between the eye region in the face 3D model and the eye region in the glasses-wearing state; and determining whether the current user wears glasses according to the similarity.
- 2. The method according to claim 1, characterized in that projecting toward the current user of the terminal and obtaining the face 3D model of the current user comprises: projecting structured light to the current user; capturing a structure light image modulated by the current user; demodulating phase information corresponding to each pixel of the structure light image to obtain a face depth image of the current user; and generating the face 3D model of the current user according to the face depth image of the current user and the structure light image.
- 3. The method according to claim 2, characterized in that demodulating the phase information corresponding to each pixel of the structure light image to obtain the face depth image of the current user comprises: demodulating the phase information corresponding to each pixel of the structure light image; converting the phase information into depth information; and generating the face depth image according to the depth information.
- 4. The method according to claim 1, characterized in that before projecting toward the current user of the terminal and obtaining the face 3D model of the current user, the method further comprises: projecting toward a terminal user to obtain a face 3D model of the terminal user in a glasses-wearing state; analyzing the face 3D model in the glasses-wearing state to extract the second feature point information of the eye region in the face 3D model in the glasses-wearing state; and storing the second feature point information.
- 5. The method according to claim 1, characterized by further comprising: when it is determined that the current user wears glasses, monitoring the current user using a first eye-protection mode, obtaining a usage time for which the current user uses the terminal device, and locking the terminal device when the usage time exceeds a first usage time threshold, the first eye-protection mode comprising the first usage time threshold; and when it is determined that the current user does not wear glasses, monitoring the current user using a second eye-protection mode, obtaining the usage time for which the current user uses the terminal device, and locking the terminal device when the usage time exceeds a second usage time threshold, the second eye-protection mode comprising the second usage time threshold; wherein the first usage time threshold is less than the second usage time threshold.
- 6. The method according to claim 1, characterized in that before projecting toward the current user of the terminal device and obtaining the face 3D model of the current user, the method further comprises: judging whether a power key wake-up signal or a lift wake-up signal is received; and that projecting toward the current user of the terminal device and obtaining the face 3D model of the current user comprises: if the power key wake-up signal or the lift wake-up signal is received, projecting toward the current user of the terminal device to obtain the face 3D model of the current user.
- 7. An eyeglasses-wearing condition detection device, characterized by comprising: a depth image acquisition component, used to project toward a current user of a terminal device and obtain a face 3D model of the current user; and a processor, used to analyze the face 3D model and extract first feature point information of an eye region in the face 3D model; compare the first feature point information with pre-stored second feature point information of the eye region in a glasses-wearing state, to obtain a similarity between the eye region in the face 3D model and the eye region in the glasses-wearing state; and determine whether the current user wears glasses according to the similarity.
- 8. The device according to claim 7, characterized in that the depth image acquisition component includes a structured light projector and a structured light camera, the structured light projector being used to project structured light to the current user, and the structured light camera being used to: capture a structure light image modulated by the current user; demodulate phase information corresponding to each pixel of the structure light image to obtain a face depth image of the current user; and generate the face 3D model of the current user according to the face depth image of the current user and the structure light image.
- 9. The device according to claim 8, characterized in that the structured light camera is further used to: demodulate the phase information corresponding to each pixel of the structure light image; convert the phase information into depth information; and generate the face depth image according to the depth information.
- 10. The device according to claim 7, characterized in that the depth image acquisition component is further used to project toward a terminal user and obtain a face 3D model of the terminal user in a glasses-wearing state; and the processor is further used to analyze the face 3D model in the glasses-wearing state, extract the second feature point information of the eye region in the face 3D model in the glasses-wearing state, and store the second feature point information.
- 11. The device according to claim 7, characterized in that the processor is further used to: when it is determined that the current user wears glasses, monitor the current user using a first eye-protection mode, obtain a usage time for which the current user uses the terminal device, and lock the terminal device when the usage time exceeds a first usage time threshold, the first eye-protection mode comprising the first usage time threshold; and when it is determined that the current user does not wear glasses, monitor the current user using a second eye-protection mode, obtain the usage time for which the current user uses the terminal device, and lock the terminal device when the usage time exceeds a second usage time threshold, the second eye-protection mode comprising the second usage time threshold; wherein the first usage time threshold is less than the second usage time threshold.
- 12. The device according to claim 7, characterized in that the processor is further used to judge whether a power key wake-up signal or a lift wake-up signal is received; and the depth image acquisition component is used to, upon receiving the power key wake-up signal or the lift wake-up signal, project toward the current user of the terminal device and obtain the face 3D model of the current user.
- 13. An electronic device, characterized in that the electronic device includes: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including instructions for performing the eyeglasses-wearing condition detection method according to any one of claims 1 to 6.
- 14. A computer-readable storage medium, characterized by including a computer program used in combination with an electronic device capable of imaging, the computer program being executable by a processor to complete the eyeglasses-wearing condition detection method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711020161.7A CN107886053A (en) | 2017-10-27 | 2017-10-27 | Eyeglasses-wearing condition detection method, device and electronic installation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107886053A true CN107886053A (en) | 2018-04-06 |
Family
ID=61782573
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711020161.7A Pending CN107886053A (en) | 2017-10-27 | 2017-10-27 | Eyeglasses-wearing condition detection method, device and electronic installation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107886053A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102663810A (en) * | 2012-03-09 | 2012-09-12 | 北京航空航天大学 | Full-automatic modeling approach of three dimensional faces based on phase deviation scanning |
CN105095841A (en) * | 2014-05-22 | 2015-11-25 | 小米科技有限责任公司 | Method and device for generating eyeglasses |
CN106200925A (en) * | 2016-06-28 | 2016-12-07 | 广东欧珀移动通信有限公司 | The control method of mobile terminal, device and mobile terminal |
CN106257995A (en) * | 2016-07-25 | 2016-12-28 | 深圳大学 | A kind of light field three-D imaging method and system thereof |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117282038A (en) * | 2023-11-22 | 2023-12-26 | 杭州般意科技有限公司 | Light source adjusting method and device for eye phototherapy device, terminal and storage medium |
CN117282038B (en) * | 2023-11-22 | 2024-02-13 | 杭州般意科技有限公司 | Light source adjusting method and device for eye phototherapy device, terminal and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107682607B (en) | Image acquiring method, device, mobile terminal and storage medium | |
CN107797664A (en) | Content display method, device and electronic installation | |
CN107610077A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107807806A (en) | Display parameters method of adjustment, device and electronic installation | |
CN107742296A (en) | Dynamic image generation method and electronic installation | |
CN107623817B (en) | Video background processing method, device and mobile terminal | |
CN107895110A (en) | Unlocking method, device and the mobile terminal of terminal device | |
CN107707839A (en) | Image processing method and device | |
CN107707831A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107734267A (en) | Image processing method and device | |
CN107509045A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107707838A (en) | Image processing method and device | |
CN107707835A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107610078A (en) | Image processing method and device | |
CN108052813A (en) | Unlocking method, device and the mobile terminal of terminal device | |
CN107734264A (en) | Image processing method and device | |
CN107509043A (en) | Image processing method and device | |
CN107644440A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107705278A (en) | The adding method and terminal device of dynamic effect | |
CN107613383A (en) | Video volume adjusting method, device and electronic installation | |
CN107705277A (en) | Image processing method and device | |
CN107454336A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107610076A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107527335A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107705243A (en) | Image processing method and device, electronic installation and computer-readable recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province
Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.
Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province
Applicant before: Guangdong Opel Mobile Communications Co., Ltd. |
RJ01 | Rejection of invention patent application after publication |
Application publication date: 2018-04-06 |