CN108491138A - Mobile screen adaptive adjustment method based on multi-sensor fusion - Google Patents

Mobile screen adaptive adjustment method based on multi-sensor fusion

Info

Publication number
CN108491138A
CN108491138A
Authority
CN
China
Prior art keywords
screen
mobile terminal
face
displacement
mobile
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810039716.0A
Other languages
Chinese (zh)
Inventor
吕建明
黄伙贤
杨灿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT
Priority to CN201810039716.0A
Publication of CN108491138A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a mobile screen adaptive adjustment method based on multi-sensor fusion, in particular a method that fuses multi-sensor data to adaptively adjust the display screen of a mobile terminal in a moving environment and thereby relieve the user's visual fatigue. The method first collects accelerometer data and face position data from a vision sensor. An acceleration data processor then computes a displacement estimate, and a visual data processor computes the displacement of the mobile terminal relative to the face. A fusion processor fuses the displacement estimate with the relative displacement to obtain a screen adjustment offset. Finally, a screen adjuster dynamically adjusts the screen according to the screen adjustment offset. By collecting and processing multi-sensor data in real time, the invention dynamically adjusts the position at which the mobile terminal screen displays content, reducing frequent refocusing of the human eye and improving the mobile interaction experience.

Description

Mobile screen adaptive adjustment method based on multi-sensor fusion
Technical field
The present invention relates to the field of mobile computing and human-computer interaction, and in particular to a mobile screen adaptive adjustment method based on multi-sensor fusion.
Background art
People often browse news, share information, and play games on their mobile phones while riding public transportation. During this time, because the body and hands shake with the vehicle, the mobile terminal screen is constantly displaced relative to the face. In such a moving environment, the human eye must continually refocus in order to read the screen content, which easily causes visual fatigue and degrades the interaction experience.
At present, existing screen adjustment methods mainly adjust the screen dynamically by collecting data from a single sensor. The first kind of method computes a screen adjustment compensation amount from accelerometer data (patent application publication CN104320545A). Because the accelerometer embedded in a mobile phone is not very accurate, the computed display compensation contains a large error. The second kind of method (granted publication CN103885593B) performs face detection with the front camera to obtain the displacement of the mobile terminal relative to the face, and then uses this relative displacement to compute the screen adjustment compensation. Because the camera has limited real-time performance, this method adjusts the screen with considerable delay. It is therefore highly desirable to propose a mobile screen adjustment method in which the accelerometer and the vision sensor (front camera) cooperate, exploiting the strengths of both sensors to improve the accuracy and real-time performance of the screen adjustment and the robustness of the system.
Summary of the invention
In order to improve the mobile interaction experience in a moving environment, the present invention fuses the data of an accelerometer and a vision sensor (front camera) and proposes a method for adaptively adjusting the display position of screen content. The basic principle of the method is to collect and process multi-sensor data in real time and dynamically adjust the position at which the mobile terminal screen displays content, thereby reducing frequent refocusing of the human eye and improving the mobile interaction experience.
The object of the present invention can be achieved by the following technical scheme.
A mobile screen adaptive adjustment method based on multi-sensor fusion, comprising the following steps:
S1. Obtain the accelerations A_X, A_Y, A_Z along the three axes from the accelerometer embedded in the mobile terminal, where A_X, A_Y, A_Z respectively denote the acceleration of the mobile terminal along the screen X, Y, and Z directions, in units of m/s².
S2. The acceleration data processor accumulates the collected data A_X, A_Y to obtain the mobile terminal displacement estimates S_AX, S_AY and stores them in the displacement estimate queue, where S_AX, S_AY respectively denote the displacement of the mobile terminal along the screen X and Y directions.
S3. Perform face detection on the real-time image captured by the vision sensor and detect the position (P_X, P_Y) of the face in the image, in units of pixels, where the vision sensor is the front camera of the mobile terminal.
S4. The visual data processor processes the acquired position data (P_X, P_Y) to obtain the displacement S_VX, S_VY of the mobile terminal relative to the face and stores it in the relative-displacement queue, where S_VX, S_VY respectively denote the displacement of the mobile terminal relative to the face along the screen X and Y directions.
S5. According to the adaptive fusion method, fuse the displacement estimates S_AX, S_AY in the displacement estimate queue with the relative displacements S_VX, S_VY in the relative-displacement queue to obtain the screen adjustment offsets S_PX, S_PY, and store them in the screen adjustment offset queue, where S_PX, S_PY respectively denote the screen adjustment offset along the screen X and Y directions, in device-independent pixels (dip).
S6. The screen adjuster dynamically applies the reverse of the screen adjustment offsets S_PX, S_PY in the screen adjustment offset queue, so that the screen content is displayed stably in the X and Y directions.
Further, step S2 proceeds as follows:
S201. Initialize the velocity estimates V_X = 0, V_Y = 0, the displacement estimates S_AX = 0, S_AY = 0, the velocity running averages V'_X = 0, V'_Y = 0, and the displacement running averages S'_X = 0, S'_Y = 0;
S202. For each received acceleration sample A_X, A_Y, accumulate to estimate the velocity:
V_X = V_X + A_X,
V_Y = V_Y + A_Y;
S203. High-pass filter the velocity:
V_X = V_X - V'_X,
V_Y = V_Y - V'_Y;
S204. Update the velocity running average:
V'_X = α·V'_X + (1 - α)·V_X,
V'_Y = α·V'_Y + (1 - α)·V_Y,
where α is a positive constant less than 1;
S205. Accumulate the velocities V_X and V_Y to estimate the displacement:
S_AX = S_AX + V_X,
S_AY = S_AY + V_Y;
S206. High-pass filter the displacement:
S_AX = S_AX - S'_X,
S_AY = S_AY - S'_Y,
and store the displacement estimates S_AX, S_AY in the displacement estimate queue;
S207. Update the displacement running average:
S'_X = α·S'_X + (1 - α)·S_AX,
S'_Y = α·S'_Y + (1 - α)·S_AY;
S208. If the user is still in reading mode, go to step S202; otherwise end step S2.
Further, step S3 also includes: improving face detection accuracy through image binarization, contrast enhancement, and face tracking, and performing face recognition at the lowest resolution supported by the front camera to increase recognition speed.
Further, step S4 proceeds as follows:
S401. Initialize the initial face position P'_X = 0, P'_Y = 0 and set the initial-face flag FirstFace = True;
S402. Each time face position data (P_X, P_Y) is received, read the value of the initial-face flag FirstFace; if FirstFace is True, go to step S403; if FirstFace is False, go to step S404;
S403. Set the initial face position P'_X = P_X, P'_Y = P_Y, set FirstFace = False, and go to step S402;
S404. Compute the displacements S_VX and S_VY of the mobile terminal relative to the face along the screen X and Y directions:
S_VX = P_X - P'_X,
S_VY = -(P_Y - P'_Y),
and store the relative displacements S_VX, S_VY in the relative-displacement queue;
S405. If the user is still in reading mode, go to step S402; otherwise end step S4.
Further, step S5 proceeds as follows:
S501. Obtain the displacement estimates S_AX, S_AY from the displacement estimate queue;
S502. Superimpose the correction amounts d_x and d_y on the displacement estimates to obtain the screen adjustment offsets S_PX, S_PY:
S_PX = S_AX + d_x,
S_PY = S_AY + d_y,
where d_x and d_y are initialized to 0, and the screen adjustment offsets S_PX, S_PY are stored in the screen adjustment offset queue;
S503. Receive the relative displacements S_VX, S_VY from the relative-displacement queue;
S504. Compute the differences Δd_x, Δd_y between the relative displacements S_VX, S_VY and the screen adjustment offsets S_PX, S_PY:
Δd_x = S_VX - S_PX,
Δd_y = S_VY - S_PY;
S505. Attenuate the differences Δd_x, Δd_y to obtain the correction amounts d_x, d_y:
d_x = Δd_x / m,
d_y = Δd_y / m,
Δd_x = Δd_x - d_x,
Δd_y = Δd_y - d_y,
where m is a positive integer;
S506. If the user is still in reading mode, go to step S501; otherwise end step S5.
Further, α preferably takes the value 0.94.
Further, m preferably takes the value 10.
Compared with the prior art, the present invention has the following advantages and effects:
1. The present invention estimates the displacement of the mobile phone relative to the face in real time by fusing the data collected by the accelerometer and the vision sensor (front camera). The fusion makes full use of the complementary strengths and weaknesses of the two sensors: the accelerometer is fast but relatively inaccurate, while the vision sensor is slower but more accurate.
2. The proposed multi-sensor adaptive fusion method lets the two sensors cooperate and exploit their respective advantages, improving robustness across a variety of complex application scenarios.
3. By adjusting the display position of the mobile terminal screen content, the present invention keeps the displayed content relatively stable and relieves visual fatigue of the human eye.
Brief description of the drawings
Fig. 1 is a flowchart of the mobile screen adaptive adjustment method based on multi-sensor fusion disclosed by the present invention;
Fig. 2 is a schematic diagram of the multi-sensor adaptive fusion method of the present invention.
Detailed description of embodiments
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work shall fall within the protection scope of the present invention.
Embodiment
This embodiment provides a mobile screen adaptive adjustment method based on multi-sensor fusion that fuses multi-sensor data to adaptively adjust the display screen of a mobile phone in a moving environment and relieve the user's visual fatigue. It should be noted that in the present invention the mobile terminal includes mobile phones, laptops, tablet computers, handheld internet devices, multimedia devices, streaming media devices, mobile internet devices (MID), wearable devices, or other types of terminal devices.
As shown in Fig. 1, the present invention comprises the following steps:
S1. Obtain the accelerations A_X, A_Y, A_Z along the three axes from the accelerometer embedded in the mobile terminal; they respectively denote the acceleration of the mobile terminal along the screen X, Y, and Z directions, in units of m/s².
S2. The acceleration data processor accumulates the data A_X, A_Y collected in step S1 to obtain the mobile terminal displacement estimates S_AX, S_AY and stores them in the displacement estimate queue. They respectively denote the displacement of the mobile terminal along the screen X and Y directions.
The basic principle of the accumulation operation is to accumulate acceleration into velocity and then accumulate velocity into displacement. Because the acceleration data from the sensor embedded in the mobile terminal is imprecise and contains errors, a traditional double integration would continually amplify the error as the integration proceeds. The present invention therefore approximates the integration by accumulating discrete samples and, to reduce the drift introduced by accumulation, applies high-pass filtering for smoothing. The specific steps are as follows:
S201. Initialize the velocity estimates V_X = 0, V_Y = 0, the displacement estimates S_AX = 0, S_AY = 0, the velocity running averages V'_X = 0, V'_Y = 0, and the displacement running averages S'_X = 0, S'_Y = 0;
S202. For each received acceleration sample A_X, A_Y, accumulate to estimate the velocity:
V_X = V_X + A_X,
V_Y = V_Y + A_Y;
S203. High-pass filter the velocity:
V_X = V_X - V'_X,
V_Y = V_Y - V'_Y;
S204. Update the velocity running average:
V'_X = α·V'_X + (1 - α)·V_X,
V'_Y = α·V'_Y + (1 - α)·V_Y,
where α is a positive constant less than 1; α may take the value 0.94.
S205. Accumulate the velocities V_X and V_Y to estimate the displacement:
S_AX = S_AX + V_X,
S_AY = S_AY + V_Y;
S206. High-pass filter the displacement:
S_AX = S_AX - S'_X,
S_AY = S_AY - S'_Y,
and store the displacement estimates S_AX, S_AY in the displacement estimate queue;
S207. Update the displacement running average:
S'_X = α·S'_X + (1 - α)·S_AX,
S'_Y = α·S'_Y + (1 - α)·S_AY,
where α is again a positive constant less than 1, preferably 0.94.
S208. If the user is still in reading mode, go to step S202; otherwise end step S2.
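For illustration only, the accumulation of step S2 can be sketched in Python roughly as follows (a minimal sketch, not part of the patent text; the class name, queue type, and per-sample callback are assumptions, and α = 0.94 is used as suggested above):

from collections import deque

ALPHA = 0.94  # smoothing constant alpha (0 < alpha < 1), 0.94 as suggested in the text

class AccelerationProcessor:
    """Step S2: accumulate acceleration into a displacement estimate with high-pass filtering."""

    def __init__(self):
        self.vx = self.vy = 0.0          # velocity estimates V_X, V_Y
        self.sx = self.sy = 0.0          # displacement estimates S_AX, S_AY
        self.vx_avg = self.vy_avg = 0.0  # running velocity averages V'_X, V'_Y
        self.sx_avg = self.sy_avg = 0.0  # running displacement averages S'_X, S'_Y
        self.queue = deque()             # displacement estimate queue

    def on_sample(self, ax, ay):
        # S202: accumulate acceleration to estimate velocity
        self.vx += ax
        self.vy += ay
        # S203: high-pass filter the velocity (remove the running average)
        self.vx -= self.vx_avg
        self.vy -= self.vy_avg
        # S204: update the running velocity average
        self.vx_avg = ALPHA * self.vx_avg + (1 - ALPHA) * self.vx
        self.vy_avg = ALPHA * self.vy_avg + (1 - ALPHA) * self.vy
        # S205: accumulate velocity to estimate displacement
        self.sx += self.vx
        self.sy += self.vy
        # S206: high-pass filter the displacement and push S_AX, S_AY into the queue
        self.sx -= self.sx_avg
        self.sy -= self.sy_avg
        self.queue.append((self.sx, self.sy))
        # S207: update the running displacement average
        self.sx_avg = ALPHA * self.sx_avg + (1 - ALPHA) * self.sx
        self.sy_avg = ALPHA * self.sy_avg + (1 - ALPHA) * self.sy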
S3. Perform face detection on the real-time image captured by the vision sensor (the front camera of the mobile terminal) and detect the position (P_X, P_Y) of the face in the image, in units of pixels.
Factors such as background interference and illumination affect the accuracy of face detection. To address this, the present invention improves detection accuracy through image binarization, contrast enhancement, and face tracking. At the same time, to speed up face recognition, recognition is performed at the lowest resolution supported by the front camera.
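As an illustrative sketch of step S3 (not part of the patent text), face detection with contrast enhancement at a reduced capture resolution could look as follows in Python with OpenCV; the choice of OpenCV, the Haar cascade, the camera index, and the 320x240 resolution are assumptions, and the binarization and face-tracking refinements mentioned above are omitted for brevity:

import cv2

# Haar cascade shipped with OpenCV; any face detector could be substituted
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_position(frame_bgr):
    """Step S3: return the face centre (P_X, P_Y) in pixels, or None if no face is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # contrast enhancement before detection
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    return (x + w / 2.0, y + h / 2.0)  # (P_X, P_Y)

# Requesting a low capture resolution speeds up detection, as suggested in step S3
cap = cv2.VideoCapture(0)  # assumed index of the front camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
ok, frame = cap.read()
if ok:
    position = detect_face_position(frame)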
S4. The visual data processor processes the position data (P_X, P_Y) obtained in step S3 to obtain the displacement S_VX, S_VY of the mobile terminal relative to the face and stores it in the relative-displacement queue. They respectively denote the displacement of the mobile terminal relative to the face along the screen X and Y directions. The detailed processing is as follows:
S401. Initialize the initial face position P'_X = 0, P'_Y = 0 and set the initial-face flag FirstFace = True;
S402. Each time face position data (P_X, P_Y) is received, read the value of the initial-face flag FirstFace:
if FirstFace is True, go to step S403,
if FirstFace is False, go to step S404;
S403. Set the initial face position P'_X = P_X, P'_Y = P_Y, set FirstFace = False, and go to step S402;
S404. Compute the displacements S_VX and S_VY of the mobile terminal relative to the face along the screen X and Y directions:
S_VX = P_X - P'_X,
S_VY = -(P_Y - P'_Y),
and store the relative displacements S_VX, S_VY in the relative-displacement queue;
S405. If the user is still in reading mode, go to step S402; otherwise end step S4.
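A minimal Python sketch of step S4 (illustrative only; the class and method names are assumptions):

class VisualDataProcessor:
    """Step S4: convert face positions (P_X, P_Y) into relative displacements (S_VX, S_VY)."""

    def __init__(self):
        self.initial = None  # initial face position (P'_X, P'_Y)
        self.queue = []      # relative-displacement queue

    def on_face_position(self, px, py):
        if self.initial is None:        # S403: the first detection fixes the reference position
            self.initial = (px, py)
            return
        ref_x, ref_y = self.initial
        s_vx = px - ref_x               # S404: displacement along X
        s_vy = -(py - ref_y)            # sign flip per S_VY = -(P_Y - P'_Y), image Y grows downwards
        self.queue.append((s_vx, s_vy))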
S5. According to the adaptive fusion method proposed by the present invention, the fusion processor fuses the displacement estimates S_AX, S_AY in the displacement estimate queue of step S2 with the relative displacements S_VX, S_VY in the relative-displacement queue of step S4 to obtain the screen adjustment offsets S_PX, S_PY, which are stored in the screen adjustment offset queue. They respectively denote the screen adjustment offset along the screen X and Y directions, in device-independent pixels (dip).
The adaptive fusion method is the core of the fusion processor of the present invention; it makes full use of the respective characteristics of the two sensors. The present invention proposes a multi-sensor fusion adaptive feedback algorithm, shown in Fig. 2, comprising the following main steps:
S501. Obtain the displacement estimates S_AX, S_AY from the displacement estimate queue of step S2. Because the accelerometer sampling interval is short, this step acquires data at a relatively high frequency.
S502. Superimpose the correction amounts d_x and d_y on the displacement estimates to obtain the screen adjustment offsets S_PX, S_PY:
S_PX = S_AX + d_x,
S_PY = S_AY + d_y,
where d_x and d_y are initialized to 0 and are continually corrected in the subsequent steps. The screen adjustment offsets S_PX, S_PY are stored in the screen adjustment offset queue.
S503. Receive the relative displacements S_VX, S_VY from the relative-displacement queue of step S4. Because the camera's acquisition time is long, this step acquires data at a relatively low frequency.
S504. Compute the differences Δd_x, Δd_y between the relative displacements S_VX, S_VY and the screen adjustment offsets S_PX, S_PY:
Δd_x = S_VX - S_PX,
Δd_y = S_VY - S_PY;
S505. Attenuate the differences Δd_x, Δd_y to obtain the correction amounts d_x, d_y:
d_x = Δd_x / m,
d_y = Δd_y / m,
Δd_x = Δd_x - d_x,
Δd_y = Δd_y - d_y,
where m is a positive integer, preferably 10.
S506. If the user is still in reading mode, go to step S501; otherwise end step S5.
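The adaptive fusion feedback of steps S501 to S506 can be sketched in Python as follows (illustrative only; the patent describes a single loop, whereas this sketch splits the high-frequency accelerometer path and the low-frequency vision path into two callbacks, and m = 10 is used as suggested above):

M = 10  # attenuation factor m (a positive integer)

class FusionProcessor:
    """Step S5: fuse accelerometer displacement estimates with vision-based relative displacements."""

    def __init__(self):
        self.dx = self.dy = 0.0   # correction amounts d_x, d_y (initially 0)
        self.offset_queue = []    # screen adjustment offset queue (S_PX, S_PY)
        self.last = (0.0, 0.0)    # most recent screen adjustment offset

    def on_displacement_estimate(self, s_ax, s_ay):
        # S501/S502: high-frequency path, add the current correction to the estimate
        s_px = s_ax + self.dx
        s_py = s_ay + self.dy
        self.last = (s_px, s_py)
        self.offset_queue.append(self.last)

    def on_relative_displacement(self, s_vx, s_vy):
        # S503/S504: low-frequency path, residual between vision and current screen offset
        s_px, s_py = self.last
        delta_x = s_vx - s_px
        delta_y = s_vy - s_py
        # S505: attenuate the residual to obtain the next correction
        self.dx = delta_x / M
        self.dy = delta_y / M

With this structure the fast but drifting accelerometer estimate is continuously pulled toward the slower but more accurate vision measurement through the attenuated correction d_x, d_y, which is the complementary behaviour described above.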
With the above method, the two sensors play to their respective strengths and compensate for each other's weaknesses. The accelerometer offers good real-time performance and is sensitive to the motion of the mobile terminal, but the physical quantity it observes does not directly reflect changes in the relative displacement between the user's eyes and the screen. The vision sensor, on the other hand, directly reflects such changes, but it is less sensitive to the motion of the terminal and has insufficient real-time performance. The method of the present invention therefore lets the accelerometer and the vision sensor complement each other and cooperate.
S6. The screen adjuster dynamically applies the reverse of the screen adjustment offsets S_PX, S_PY in the screen adjustment offset queue of step S5, so that the screen content is displayed stably in the X and Y directions, reducing frequent refocusing of the human eye and improving the mobile interaction experience.
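Step S6 amounts to translating the displayed content by the negative of the screen adjustment offset. A minimal sketch, assuming a hypothetical content_view object with a set_translation(x_dip, y_dip) method (the actual call depends on the UI framework of the mobile terminal):

def apply_screen_adjustment(content_view, offset_queue):
    """Step S6: counter-translate the screen content by the latest offsets."""
    while offset_queue:
        s_px, s_py = offset_queue.pop(0)
        # reverse adjustment: shift the content opposite to the estimated screen motion
        content_view.set_translation(-s_px, -s_py)  # values in dip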
In conclusion the present embodiment is carried for the visual fatigue problem in movement environment when viewing mobile terminal screen Go out a kind of acceleration transducer by merging mobile terminal and the collected data of visual sensor (front camera) institute are dynamic The method that state adjusts mobile terminal screen content display location so that the content in movement environment in mobile terminal screen is shown Stablize relatively, to reduce the unnecessary zoom of human eye, achievees the purpose that alleviate visual fatigue, the mobile interactive experience of optimization.
The above embodiment is a preferred embodiment of the present invention, but embodiments of the present invention are not by above-described embodiment Limitation, it is other it is any without departing from the spirit and principles of the present invention made by changes, modifications, substitutions, combinations, simplifications, Equivalent substitute mode is should be, is included within the scope of the present invention.

Claims (7)

1. A mobile screen adaptive adjustment method based on multi-sensor fusion, characterized in that the method comprises the following steps:
S1. obtaining the accelerations A_X, A_Y, A_Z along the three axes from the accelerometer embedded in the mobile terminal, wherein A_X, A_Y, A_Z respectively denote the acceleration of the mobile terminal along the screen X, Y, and Z directions, in units of m/s²;
S2. accumulating, by an acceleration data processor, the collected data A_X, A_Y to obtain mobile terminal displacement estimates S_AX, S_AY and storing them in a displacement estimate queue, wherein S_AX, S_AY respectively denote the displacement of the mobile terminal along the screen X and Y directions;
S3. performing face detection on the real-time image captured by a vision sensor and detecting the position (P_X, P_Y) of the face in the image, in units of pixels, wherein the vision sensor is the front camera of the mobile terminal;
S4. processing, by a visual data processor, the acquired position data (P_X, P_Y) to obtain the displacement S_VX, S_VY of the mobile terminal relative to the face and storing it in a relative-displacement queue, wherein S_VX, S_VY respectively denote the displacement of the mobile terminal relative to the face along the screen X and Y directions;
S5. fusing, according to an adaptive fusion method, the displacement estimates S_AX, S_AY in the displacement estimate queue with the relative displacements S_VX, S_VY in the relative-displacement queue to obtain screen adjustment offsets S_PX, S_PY and storing them in a screen adjustment offset queue, wherein S_PX, S_PY respectively denote the screen adjustment offset along the screen X and Y directions, in device-independent pixels (dip);
S6. dynamically applying, by a screen adjuster, the reverse of the screen adjustment offsets S_PX, S_PY in the screen adjustment offset queue, so that the screen content is displayed stably in the X and Y directions.
2. The mobile screen adaptive adjustment method based on multi-sensor fusion according to claim 1, characterized in that step S2 proceeds as follows:
S201. initializing the velocity estimates V_X = 0, V_Y = 0, the displacement estimates S_AX = 0, S_AY = 0, the velocity running averages V'_X = 0, V'_Y = 0, and the displacement running averages S'_X = 0, S'_Y = 0;
S202. for each received acceleration sample A_X, A_Y, accumulating to estimate the velocity:
V_X = V_X + A_X,
V_Y = V_Y + A_Y;
S203. high-pass filtering the velocity:
V_X = V_X - V'_X,
V_Y = V_Y - V'_Y;
S204. updating the velocity running average:
V'_X = α·V'_X + (1 - α)·V_X,
V'_Y = α·V'_Y + (1 - α)·V_Y,
wherein α is a positive constant less than 1;
S205. accumulating the velocities V_X and V_Y to estimate the displacement:
S_AX = S_AX + V_X,
S_AY = S_AY + V_Y;
S206. high-pass filtering the displacement:
S_AX = S_AX - S'_X,
S_AY = S_AY - S'_Y,
and storing the displacement estimates S_AX, S_AY in the displacement estimate queue;
S207. updating the displacement running average:
S'_X = α·S'_X + (1 - α)·S_AX,
S'_Y = α·S'_Y + (1 - α)·S_AY;
S208. if the user is still in reading mode, going to step S202; otherwise ending step S2.
3. The mobile screen adaptive adjustment method based on multi-sensor fusion according to claim 2, characterized in that step S3 further includes: improving face detection accuracy through image binarization, contrast enhancement, and face tracking, and performing face recognition at the lowest resolution supported by the front camera to increase recognition speed.
4. The mobile screen adaptive adjustment method based on multi-sensor fusion according to claim 2, characterized in that step S4 proceeds as follows:
S401. initializing the initial face position P'_X = 0, P'_Y = 0 and setting the initial-face flag FirstFace = True;
S402. each time face position data (P_X, P_Y) is received, reading the value of the initial-face flag FirstFace; if FirstFace is True, going to step S403; if FirstFace is False, going to step S404;
S403. setting the initial face position P'_X = P_X, P'_Y = P_Y, setting FirstFace = False, and going to step S402;
S404. computing the displacements S_VX and S_VY of the mobile terminal relative to the face along the screen X and Y directions:
S_VX = P_X - P'_X,
S_VY = -(P_Y - P'_Y),
and storing the relative displacements S_VX, S_VY in the relative-displacement queue;
S405. if the user is still in reading mode, going to step S402; otherwise ending step S4.
5. The mobile screen adaptive adjustment method based on multi-sensor fusion according to claim 2, characterized in that step S5 proceeds as follows:
S501. obtaining the displacement estimates S_AX, S_AY from the displacement estimate queue;
S502. superimposing the correction amounts d_x and d_y on the displacement estimates to obtain the screen adjustment offsets S_PX, S_PY:
S_PX = S_AX + d_x,
S_PY = S_AY + d_y,
wherein d_x and d_y are initialized to 0, and the screen adjustment offsets S_PX, S_PY are stored in the screen adjustment offset queue;
S503. receiving the relative displacements S_VX, S_VY from the relative-displacement queue;
S504. computing the differences Δd_x, Δd_y between the relative displacements S_VX, S_VY and the screen adjustment offsets S_PX, S_PY:
Δd_x = S_VX - S_PX,
Δd_y = S_VY - S_PY;
S505. attenuating the differences Δd_x, Δd_y to obtain the correction amounts d_x, d_y:
d_x = Δd_x / m,
d_y = Δd_y / m,
Δd_x = Δd_x - d_x,
Δd_y = Δd_y - d_y,
wherein m is a positive integer;
S506. if the user is still in reading mode, going to step S501; otherwise ending step S5.
6. The mobile screen adaptive adjustment method based on multi-sensor fusion according to claim 2, characterized in that α takes the value 0.94.
7. The mobile screen adaptive adjustment method based on multi-sensor fusion according to claim 5, characterized in that m takes the value 10.
CN201810039716.0A 2018-01-16 2018-01-16 Mobile screen adaptive adjustment method based on multi-sensor fusion Pending CN108491138A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810039716.0A CN108491138A (en) 2018-01-16 2018-01-16 Mobile screen adaptive adjustment method based on multi-sensor fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810039716.0A CN108491138A (en) 2018-01-16 2018-01-16 Mobile screen adaptive adjustment method based on multi-sensor fusion

Publications (1)

Publication Number Publication Date
CN108491138A (en) 2018-09-04

Family

ID=63344113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810039716.0A Pending CN108491138A (en) Mobile screen adaptive adjustment method based on multi-sensor fusion

Country Status (1)

Country Link
CN (1) CN108491138A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013071674A1 (en) * 2011-11-18 2013-05-23 中兴通讯股份有限公司 Image display method and device for mobile terminal
CN103365430A (en) * 2012-04-10 2013-10-23 洪荣昭 Displacement compensation method of mobile device screen frame
CN103885593A (en) * 2014-03-14 2014-06-25 深圳市中兴移动通信有限公司 Handheld terminal and screen anti-shake method and device of handheld terminal
CN104461289A (en) * 2014-11-28 2015-03-25 广东欧珀移动通信有限公司 Terminal and screen display method and device for terminal
CN104394451A (en) * 2014-12-05 2015-03-04 宁波菊风系统软件有限公司 Video presenting method for intelligent mobile terminal
CN106648344A (en) * 2015-11-02 2017-05-10 重庆邮电大学 Screen content adjustment method and equipment

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110007760A (en) * 2019-03-28 2019-07-12 京东方科技集团股份有限公司 Display control method, display control unit and display device

Similar Documents

Publication Publication Date Title
US11317024B2 (en) Electronic image stabilization frequency estimator
US9024876B2 (en) Absolute and relative positioning sensor fusion in an interactive display system
CN103763483A (en) Method and device for shaking resistance in photo taking process of mobile terminal and mobile terminal
US20150260750A1 (en) Electronic apparatus and program
CN111601032A (en) Shooting method and device and electronic equipment
CN109040525B (en) Image processing method, image processing device, computer readable medium and electronic equipment
JP2015535378A (en) Motion compensation in interactive display systems.
CN103197774A (en) Method and system for mapping application track of emission light source motion track
WO2017005070A1 (en) Display control method and device
TW201017477A (en) Locus smoothing method and navigation device using the same
WO2020075825A1 (en) Movement estimating device, electronic instrument, control program, and movement estimating method
CN108491138A (en) A kind of mobile screen self-adapting regulation method based on Multi-sensor Fusion
US11823359B2 (en) Systems and methods for leveling images
TWI518559B (en) Sub-frame accumulation method and apparatus for keeping reporting errors of an optical navigation sensor consistent across all frame rates
CN111126101A (en) Method and device for determining key point position, electronic equipment and storage medium
CN111614834B (en) Electronic device control method and device, electronic device and storage medium
CN112486326A (en) Position coordinate prediction method for gesture control movement, intelligent terminal and medium
CN109756728A (en) Image display method and apparatus, electronic equipment, computer readable storage medium
CN114840126B (en) Object control method, device, electronic equipment and storage medium
WO2023223704A1 (en) Information processing device, information processing method, and program
CN115690168A (en) Video stabilization method, device, equipment and computer readable storage medium
CN116263341A (en) IMU data correction method and system based on depth video technology
CN101621620B (en) Image reduction method for electronic device and relevant device thereof
EP1574993B1 (en) Method and system for estimating motion from a sequence of bidimensional arrays of sampled values, optical mouse including such a system and computer program product therefor
CN116958142A (en) Target detection and tracking method based on compound eye event imaging and high-speed turntable

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20180904