WO2017005070A1 - Display control method and apparatus - Google Patents

Display control method and apparatus

Info

Publication number
WO2017005070A1
WO2017005070A1 (PCT/CN2016/084586)
Authority
WO
WIPO (PCT)
Prior art keywords
face
offset value
reference point
display
terminal
Prior art date
Application number
PCT/CN2016/084586
Other languages
English (en)
French (fr)
Inventor
葛君伟
王清玲
范张群
张玮玮
崔玉岩
Original Assignee
重庆邮电大学
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201510400254.7A (published as CN106339070B)
Priority claimed from CN201510433741.3A (published as CN106371552B)
Priority claimed from CN201510731860.7A (published as CN106648344B)
Application filed by 重庆邮电大学 (Chongqing University of Posts and Telecommunications) and 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Publication of WO2017005070A1
Priority to US15/854,633 (US10635892B2)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/167Detection; Localisation; Normalisation using comparisons between temporally consecutive images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04806Zoom, i.e. interaction techniques or interactors for controlling the zooming operation

Definitions

  • this application claims priority to three Chinese invention patent applications: Application No. 201510433741.3, entitled "A Control Method and Apparatus for Media Display on a Mobile Terminal", filed with the Chinese Patent Office on July 22, 2015; Application No. 201510400254.7, entitled "A Display Control Method and Mobile Terminal", filed on July 9, 2015; and Application No. 201510731860.7, entitled "A Screen Content Adjustment Method and Device", filed on November 2, 2015; the entire contents of which are incorporated herein by reference.
  • the present invention relates to the field of computer communication technologies, and in particular, to a display control method and apparatus.
  • portable terminals such as mobile phones, personal digital assistants (PDAs), tablets, and the like have been increasingly integrated into the daily lives of users.
  • with the help of these terminal devices, users can communicate, take photographs, view displayed content, and engage in other entertainment activities such as games.
  • in the process of using a terminal, jitter is often encountered; for example, while the user is walking, or while traveling in a vehicle.
  • the jitter referred to herein is the jitter of the displayed content relative to the user's eyes. This is because, when the user is in a non-stationary state, it is difficult to keep the handheld terminal device stationary relative to the eyes, so if the user watches the screen for a long time, dizziness and discomfort may occur.
  • in addition, under jitter, the distance between the screen of the terminal and the face of the user changes; therefore, the angle, orientation, and so on of the content displayed on the screen change with the distance from the face, which degrades the content browsing effect.
  • the present invention has been made in view of the above problems in the related art.
  • the invention provides a display control method and device.
  • a display control method comprising the following operations:
  • collecting face images continuously, and extracting face reference points contained in the face images, wherein the face images include: a first face image containing at least one first face reference point and a second face image containing at least one second face reference point;
  • acquiring a face position offset value based on the first face reference point and the second face reference point; and
  • acquiring, based on the face position offset value, a display position offset value of the content displayed on the display screen, and performing display control according to the display position offset value.
  • a display control apparatus comprising:
  • an image acquisition unit, configured to continuously collect face images containing face reference points, wherein the face images include: a first face image and a second face image;
  • an extraction unit, configured to extract the face reference points, wherein the face reference points include: a first face reference point contained in the first face image and a second face reference point contained in the second face image;
  • a first calculation unit, configured to calculate a face position offset value according to the face reference points extracted by the extraction unit;
  • a second calculation unit, configured to calculate, according to the face position offset value, a display position offset value of the content displayed on the display screen; and
  • a display control unit, configured to perform display control according to the display position offset value.
  • another display control method comprising the following operations:
  • acquiring movement data of the terminal;
  • determining a relative displacement between the terminal and the user according to the movement data; and
  • controlling the content displayed on the display screen of the terminal to move in the direction opposite to the relative displacement.
  • a display control apparatus comprising:
  • an obtaining unit, configured to acquire movement data of the terminal;
  • a calculation unit, configured to calculate a relative displacement between the terminal and the user according to the movement data; and
  • a control unit, configured to control the content displayed on the display screen of the terminal to move in the direction opposite to the relative displacement.
  • by means of any of the above technical solutions, when jitter occurs, the display of the terminal screen can be adjusted according to the movement of the face or of the terminal, so that the relative movement between the terminal screen and the user is minimized or the two are even kept relatively static, thereby weakening or eliminating the discomfort caused by the jitter.
  • FIG. 1 is a schematic flow chart showing a display control method according to a first embodiment of the present invention
  • FIG. 2 is a flow chart showing an example 1 according to a first embodiment of the present invention
  • FIG. 3 is a schematic diagram showing acquisition of spatial coordinates based on a face reference point
  • FIG. 4A is a flow chart showing Example 2 according to the first embodiment of the present invention.
  • FIG. 4B is a schematic flow chart showing a display control method according to the first embodiment of the present invention.
  • FIGS. 5A-5C are schematic diagrams showing an application scenario of a display control method according to an embodiment of the present invention.
  • FIG. 6 is a block diagram showing the structure of a display control device according to an example of the second embodiment of the present invention.
  • FIG. 7 is a block diagram showing the structure of a display control device according to another example of the second embodiment of the present invention.
  • FIG. 8 is a flowchart of a display control method according to a third embodiment of the present invention.
  • FIG. 9 is a schematic view showing a display control method according to a third embodiment of the present invention.
  • FIG. 10 is a block diagram showing the architecture of a mobile phone for implementing an embodiment of the present invention.
  • Figure 11 is a block diagram showing a display control device according to a fourth embodiment of the present invention.
  • "face images" refers to a plurality of face images continuously acquired within a predetermined period, or a plurality of face images continuously acquired at predetermined time intervals; for convenience of description, two adjacent face images among the collected plurality of face images are taken as an example herein; in the following description, as an example, the face images are collected by the front camera of the terminal device.
  • a "face reference point" is the point in the image that corresponds to a feature point on the person's face.
  • the feature points on the face may include, but are not limited to, at least one of the user's nose, eyes, mouth, and the like; alternatively, the face reference point can also be set as the intersection between the line connecting the eyes and the line connecting the nose and mouth in the face image, the focus point of one eye, the focus point of both eyes, and so on.
  • the "display control" or “display adjustment” mentioned herein includes, but is not limited to, one of the following or a combination thereof: a translation process, an enlargement process, a reduction process, and the like.
  • the terminal device according to the embodiments of the present invention may include, but is not limited to: tablet computers, smart phones, notebook computers, PDAs, palmtop computers, personal computers, mobile Internet devices (MIDs), and other terminal devices capable of displaying content data; as well as mobile terminals such as handheld game consoles, in-vehicle devices such as on-board computer displays, point-of-sale (POS) terminals, and other handheld devices.
  • the content displayed on the screen may include display contents such as text, pictures, images, media, and the like.
  • the technical solution provided by the embodiments of the present invention adjusts and controls the display of the screen according to the change in the position of the user's face, or the change in the relative position of the user and the terminal, so as to balance or cancel the relative position change described above, thereby weakening or eliminating the poor screen-viewing effects caused by jitter.
  • first, the terminal for implementing this embodiment and the second embodiment below can be configured with an anti-shake function at the time of shipment.
  • the front camera of the terminal is turned on in response to the user's command to enable the anti-shake function; that is, when the user turns on the anti-shake switch, the front camera of the terminal is opened and the anti-shake processing function is enabled. Likewise, in response to the user's command to disable the anti-shake function, the front camera of the terminal is closed and the anti-shake processing function is turned off.
  • in this way, anti-shake processing (i.e., display control) is performed only when the user requires it and is stopped when the user does not, which saves system resources and reduces the resource occupancy of the terminal device.
  • of course, the method of the embodiments of the present invention can also be implemented by the user downloading an APP that integrates the display control method according to an embodiment of the present invention; the invention is not limited thereto.
  • the first embodiment provides a display control method in which display of a screen is controlled according to a change in the position of a face of a user.
  • FIG. 1 is a flow chart of a display control method according to the first embodiment of the present invention. As shown in FIG. 1, the method includes the following operations S101-S105:
  • S101: Collect face images continuously, and extract the face reference points contained in the face images.
  • for convenience of description, the face images here are a first face image and a second face image, respectively, wherein the first face image may be the currently collected face image, and the second face image may be the previously collected face image adjacent to the currently collected one.
  • the first face image may include at least one first face reference point
  • the second face image may include at least one second face reference point. The extraction of the reference points can be completed by locating the face in the current face image based on face recognition technology.
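  • to make the extraction step concrete, the following is a minimal sketch of S101 using OpenCV's stock Haar cascades; it returns the midpoint between the two detected eye centers as the face reference point. The detector choice, camera index, and cascade parameters are illustrative assumptions — the text only requires generic face recognition technology, not any particular detector.

```python
import cv2

# Stock Haar cascades shipped with opencv-python (illustrative choice).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def face_reference_point(frame):
    """Return (u, v) image coordinates of a face reference point, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        if len(eyes) >= 2:
            # midpoint between the two eye centers, in full-image coordinates
            (ex1, ey1, ew1, eh1), (ex2, ey2, ew2, eh2) = eyes[:2]
            u = x + (ex1 + ew1 / 2 + ex2 + ew2 / 2) / 2
            v = y + (ey1 + eh1 / 2 + ey2 + eh2 / 2) / 2
            return (u, v)
    return None

cap = cv2.VideoCapture(0)   # front camera; the index is device-specific
ok, frame = cap.read()
if ok:
    print(face_reference_point(frame))
cap.release()
```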
  • S103: Acquire a face position offset value based on the first face reference point and the second face reference point. This step amounts to determining the position offset of the face reference point between two adjacent face images, i.e., of the current face image (the first face image) relative to the previous face image (the second face image).
  • S105: Acquire, based on the face position offset value, a display position offset value of the content displayed on the display screen, and perform display control according to the display position offset value.
  • the display position offset value is used to shift the content or display interface currently shown on the screen in the same direction as the face reference point moved, so that the human eye does not perceive jitter of the displayed content relative to the eyes.
  • regarding the face image collection in step S101, two exemplary ways are provided herein; of course, the invention is not limited thereto.
  • Method 1: Collect within a predetermined period. The predetermined period here may be the duration of one jitter band acquired by the built-in gyroscope, which may be the time for the user to walk one step, or the duration of one jitter band of a vehicle in motion. To avoid misjudging a jitter band, two acceleration thresholds can be introduced for the judgment. For example, for the time of one walking step, two acceleration thresholds A and B can be preset, where A is smaller than B, and the built-in gyroscope acquires the acceleration band while the user walks.
  • the terminal device can then select, as the time for the user to walk one step, the duration of the following band within the acceleration band: the band that begins when the acceleration data first exceeds B and, during the subsequent band variation, ends when the acceleration data falls below A.
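  • as a rough illustration of Method 1, the sketch below scans a trace of acceleration magnitudes for one such band using the two thresholds A < B described above. The threshold values, units, and sampling interval are illustrative assumptions, not values from the patent.

```python
# Estimate one jitter period (e.g., one walking step) from an acceleration
# trace: the band starts when the magnitude first exceeds B and ends when
# it next falls below A (A < B), as described above.
def jitter_band_duration(samples, dt, A=1.5, B=3.0):
    """samples: iterable of acceleration magnitudes (m/s^2, assumed);
    dt: sampling interval in seconds. Returns the band duration or None."""
    start = None
    for i, acc in enumerate(samples):
        if start is None:
            if acc > B:          # band begins: acceleration exceeds B
                start = i
        elif acc < A:            # band ends: acceleration drops below A
            return (i - start) * dt
    return None                  # no complete band found
```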
  • Method 2: Collect at predetermined time intervals. For example, a face image is acquired every 0.05, 0.1, 0.2, 0.5, or 1 second.
  • the predetermined time interval may be set according to actual needs, which is not specifically limited in the present invention.
  • the face position offset value may be an offset value in plane coordinates, or an offset value in another set spatial coordinate system such as a camera coordinate system; the present invention places no limitation on this. The two cases are described below with examples: Example 1 is based on the camera coordinate system; Example 2 is based on the plane coordinate system.
  • Example 1: Based on the camera coordinate system
  • in this Example 1, as shown in FIG. 2, the operation of acquiring the face position offset value in step S103 can be implemented by the following S21-S25:
  • S21: Acquire image coordinates: the coordinates of the face image in the plane coordinate system, including first image coordinates corresponding to the first face reference point and second image coordinates corresponding to the second face reference point;
  • S23: Acquire spatial coordinates: based on the image coordinates, acquire spatial coordinates in the set spatial coordinate system, including a first spatial coordinate corresponding to the first face reference point and a second spatial coordinate corresponding to the second face reference point; for example, the set spatial coordinate system may be a camera coordinate system, preset in the terminal device, with the optical axis center of the camera as the origin;
  • S25: Take the coordinate offset value of the second spatial coordinate relative to the first spatial coordinate as the face position offset value, completing the operation of acquiring the face position offset value.
  • optionally, when acquiring the face position offset value, the moving speed and/or acceleration of the position offset of the first face reference point (the current face reference point) relative to the second face reference point (the previous face reference point) may further be obtained from the time interval between the collected face images and the face position offset value; subsequently, when calculating the display position offset value, a display position shift speed can be calculated according to this moving speed and/or acceleration, and display control can be performed according to the display position shift speed.
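  • a minimal sketch of this optional computation follows (names are illustrative): dividing the face position offset by the collection interval yields an offset velocity to which the display shift speed can be matched.

```python
# Offset velocity of the face reference point between two collected images;
# the display shift speed can be adapted to this value (see S105).
def offset_velocity(p_curr, p_prev, dt):
    """p_curr, p_prev: (x, y) reference-point coordinates of the first
    (current) and second (previous) face images; dt: seconds between them."""
    return ((p_curr[0] - p_prev[0]) / dt,
            (p_curr[1] - p_prev[1]) / dt)
```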
  • in this example, before collecting the face images, the terminal can acquire externally input length data between at least two face reference points.
  • for example, the length data between face reference points can be input by the user after manual measurement, and can include the distance from the nose to each of the two eyes, as well as the distance between the two eyes.
  • based on this, in this example, the acquisition of the spatial coordinates is achieved by the following operations: first, obtain the normalized coordinates corresponding to the image coordinates in the camera coordinate system; from the normalized coordinates, calculate the unit vectors from the origin of the camera coordinate system to the face reference points, and obtain the angles between the unit vectors; from the pre-input length data between the face reference points and the angles between the unit vectors, calculate the distances from the origin to the face reference points; and from these distances and the unit vectors, calculate the spatial coordinates of the face reference points in the camera coordinate system.
  • FIG. 3 shows a schematic diagram of acquiring spatial coordinates based on a face reference point.
  • here, the case of collecting two face images is taken as an example.
  • however, two or more face images may be used; the present invention is not limited in this respect.
  • suppose each face image includes three face reference points whose corresponding spatial coordinates in the camera coordinate system are P1, P2, and P3, respectively. Using image processing technology, the terminal device can acquire the image coordinates corresponding to the face reference points of each of the two face images, namely (u1, v1), (u2, v2), and (u3, v3) (i.e., the operation of acquiring image coordinates described above); then, the normalized coordinates (X1c1, Y1c1), (X1c2, Y1c2), and (X1c3, Y1c3) corresponding to the image coordinates can be obtained in the camera coordinate system using the normalization formula given in the description below (i.e., the operation of obtaining normalized coordinates described above).
  • the terminal device may use the normalized coordinates to calculate the unit vectors from the origin O of the camera coordinate system to the face reference points, that is, the unit vectors e1, e2, and e3 from the point O to the points P1, P2, and P3, respectively.
  • the terminal device then obtains the angles between the unit vectors, denoting the angle between e2 and e3 as α, the angle between e1 and e3 as β, and the angle between e1 and e2 as γ.
  • from the length data between the face reference points and the angles between the unit vectors, the terminal device can calculate the distances from the origin of the camera coordinate system to the face reference points. Let the distances from O to the three points P1, P2, and P3 be d1, d2, and d3, respectively, where a is the length data between P2 and P3, b is the length data between P1 and P3, and c is the length data between P1 and P2; the law-of-cosines equations (1)-(3) relating these quantities, and the derived equations (4)-(10), are given in the description below.
  • the value of y can be obtained from equation (10); substituting the value of y into equation (9) gives the value of x. The value of d1 can then be obtained using equations (4), (5), and (6), and from it the values of d2 and d3 are calculated.
  • the terminal device can determine the corresponding spatial coordinates of the face reference points in the camera coordinate system from the distances from the origin to the face reference points and the calculated unit vectors, using the coordinate calculation formula Pi = di · ei (i = 1, 2, 3).
  • this completes the calculation of the spatial coordinates. Then, based on the calculated first spatial coordinate and second spatial coordinate, the coordinate offset value between the two is calculated and taken as the face position offset value; from it, the display position offset value on which the adjustment and control of the display content on the terminal screen is based is further obtained, and display control is performed.
  • Example 2: Based on the plane coordinate system. In this Example 2, as shown in FIG. 4A, the operation of acquiring the face position offset value in step S103 can be implemented by the following S41-S43:
  • S41 (the same as S21 above): Acquire image coordinates, i.e., the coordinates of the face image in the plane coordinate system, including the first image coordinates corresponding to the first face reference point (the current position) and the second image coordinates corresponding to the second face reference point (the previous position);
  • as one option, the acquired image coordinates may be stored in memory, so that whether a face offset has occurred can be determined by comparing the first image coordinates with the second image coordinates. Specifically, if the two are the same, no face offset has occurred; if the two differ, a face offset has occurred.
  • as another option, the currently obtained image coordinates may be stored only when the image coordinates obtained in two successive acquisitions have changed.
  • in this implementation, it may first be determined whether first image coordinates of the face reference point (the previous position) are stored; if so, the current face image is not the first face image, a face image was also collected before, i.e., movement of the face has occurred, and the subsequent steps can continue. If the first image coordinates do not exist, no movement of the face has occurred, and no subsequent operations are required. When the previous position of the face reference point in the coordinate system is stored:
  • S43: By comparing the first image coordinates and the second image coordinates, take the coordinate change of the second image coordinates relative to the first image coordinates as the face position offset value.
  • the magnitude of the face position offset can be represented by the difference between the second image coordinates and the first image coordinates.
  • in a specific implementation, real-time display adjustment control may be performed for any offset of the face reference point; alternatively, no adjustment control is performed when the reference point shifts only slightly.
  • in the latter case, an offset threshold may be set, and display adjustment control is performed only when the face position offset value is greater than the offset threshold.
  • for example, the face position offset value includes at least one of: an X coordinate offset value, a Y coordinate offset value, and a distance offset value; correspondingly, the preset threshold includes at least one of: an X coordinate offset threshold, a Y coordinate offset threshold, and a distance offset threshold.
  • specifically, as one implementation, the face position offset value includes an X coordinate offset value, a Y coordinate offset value, and a distance offset value. It may further be determined whether the X coordinate offset value is greater than a preset X coordinate offset threshold, whether the Y coordinate offset value is greater than a preset Y coordinate offset threshold, and whether the distance offset value is greater than a preset distance offset threshold; if the determination result for any of the three is YES, the subsequent operation of acquiring the display position offset value is performed.
  • as another implementation, the face position offset value includes an X coordinate offset value and a Y coordinate offset value.
  • correspondingly, it may be determined whether the X coordinate offset value is greater than a preset X coordinate offset threshold, and whether the Y coordinate offset value is greater than a preset Y coordinate offset threshold; if the determination result for either of the two is YES, the subsequent operation of acquiring the display position offset value is performed.
  • as yet another implementation, the face position offset value includes an X coordinate offset value or a Y coordinate offset value. In this case, it may be determined only whether the X coordinate offset value is greater than a preset X coordinate offset threshold, or whether the Y coordinate offset value is greater than a preset Y coordinate offset threshold; if the result is YES, the subsequent operation of acquiring the display position offset value is performed.
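  • all three variants reduce to the same check: trigger the acquisition of the display position offset value only when some available offset component exceeds its preset threshold. A minimal sketch follows; the threshold values are illustrative assumptions.

```python
# Threshold gate for display adjustment: components that are not available
# in a given implementation are passed as None and simply skipped.
def should_adjust(dx=None, dy=None, dd=None, tx=5.0, ty=5.0, td=10.0):
    """dx, dy: X/Y coordinate offset values; dd: distance offset value;
    tx, ty, td: the corresponding preset thresholds (pixels, assumed)."""
    return any(v is not None and abs(v) > t
               for v, t in ((dx, tx), (dy, ty), (dd, td)))
```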
  • three exemplary ways of obtaining the display position offset value are provided below and described in turn. As the following description shows, in the embodiments of the present invention the face position offset value and the display position offset value may or may not be equal.
  • the invention places no limitation on this.
  • Way 1: Obtain the display position offset value from the face position offset value, and adjust and control the display content directly accordingly.
  • taking the operation of Example 1 as an example: if the coordinate offset value indicates that the second spatial coordinate shifted 3 mm in the positive direction of the X-axis relative to the first spatial coordinate, the display content on the screen is translated 3 mm in the positive direction of the X-axis.
  • similarly, for the Z-axis direction: if the coordinate offset value indicates that the second spatial coordinate moved 3 mm along that axis relative to the first spatial coordinate, the display content on the screen can be scaled down by a corresponding proportion, and so on.
  • Way 2: The display position offset value may be acquired, and display control then performed, only when the face position offset value is judged to be greater than the preset offset threshold. For details, refer to the description of Example 2 above.
  • Way 3: A maximum allowable offset value can be set.
  • in one embodiment, the set maximum allowable offset value means an offset that does not affect the relative integrity of the presentation of the display content; that is, a certain amount of incomplete display is allowed, provided it does not affect the overall understanding of the display content. In another embodiment, an offset buffer is placed around the display screen: the content display interface does not completely fill the display screen; in normal display, the content display interface is centered on the display screen with an offset buffer arranged around it, and when subsequent jitter occurs, the content display interface can be shifted within the offset buffer. The offset distance corresponding to the maximum boundary position of the offset buffer is then the maximum allowable offset value.
  • specifically, when the face position offset value is greater than the preset offset threshold, the display position offset value may further be determined according to the face position offset value and the preset maximum allowable offset value.
  • the maximum allowable offset value here includes a maximum offset value in the X-axis direction and a maximum offset value in the Y-axis direction.
  • taking the operation of Example 2 as an example, the smaller in absolute value of the X coordinate offset value and the maximum offset value in the X-axis direction may be taken as the display position offset value in the X-axis direction; and/or the smaller in absolute value of the Y coordinate offset value and the maximum offset value in the Y-axis direction may be taken as the display position offset value in the Y-axis direction.
  • the purpose is to convert an excessively large movement offset of the user into a reasonable value, so that after adjustment by this reasonable offset value the display content remains relatively complete to view.
  • the vector direction of the display position offset value coincides with the vector direction of the face position offset value.
  • for example, the display position offset value can be determined according to the following formula (11):
  • Dx2 = (Dx1 < 0 ? -1 : 1) × Min(abs(Dx1), DxMax);
  • Dy2 = (Dy1 < 0 ? -1 : 1) × Min(abs(Dy1), DyMax)    (11)
  • where Dx1 and Dy1 are the face position offset values calculated in S43 above, i.e., the X coordinate offset value and the Y coordinate offset value, respectively;
  • and Dx2 and Dy2 are the display position offset values calculated in the current operation, i.e., the X coordinate display position offset value and the Y coordinate display position offset value.
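  • formula (11) is a sign-preserving clamp of the face offset to the maximum allowable offset. The sketch below is a direct transcription; the maximum values would come from settings such as those given for video and text content in the description below.

```python
# Formula (11): keep the sign of the face position offset (Dx1, Dy1) but
# cap its magnitude at the maximum allowable offset (DxMax, DyMax).
def display_offset(dx1, dy1, dx_max, dy_max):
    dx2 = (-1 if dx1 < 0 else 1) * min(abs(dx1), dx_max)
    dy2 = (-1 if dy1 < 0 else 1) * min(abs(dy1), dy_max)
    return dx2, dy2

# e.g., with DxMax = 0.2 * device_width for video content (see below):
# display_offset(-300, 40, 0.2 * 1080, 0.1 * 1920) -> (-216.0, 40)
```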
  • after the display position offset value is acquired, the content displayed on the terminal screen, or the interface in which the display content resides, is shifted according to the display position offset value in the same direction as the offset of the face reference point.
  • in a specific implementation, this can be realized by a media player such as a modified video player or e-book reader.
  • if the display position offset value is a vector value, the display content or display interface is simply controlled to shift according to the vector display position offset value; or, if only the absolute value of the display position offset value is obtained, the display content or display interface is controlled to shift by the display position offset value in the same direction as the face position offset value.
  • in this embodiment, when display control is performed, whether the display content or display interface is adjusted according to the display position offset value or an offset display interface is adjusted back to center, the movement can be performed at a certain speed.
  • optionally, where the face position shift speed/acceleration has been computed earlier and a display position shift speed/acceleration has been derived from it, the content on the terminal screen, or the interface in which the display content resides, can be display-controlled according to that display position shift speed/acceleration. That is, the display position shift speed can be adapted to the speed at which the position of the user's face reference point shifts (i.e., the face position shift speed described above).
  • the display position shift speed is chosen on the principle that, as far as possible, the user should not perceive jitter. For example, when an offset content display interface is adjusted back to center, since the user is essentially in a stable state, an adjustment control speed that the user does not easily perceive can be employed.
  • in a specific implementation, if the user keeps the terminal device relatively stable for longer than a set threshold (for example, 0.5 seconds, 1 second, 2 seconds, etc.), the user can be considered to be in a stable state; accordingly, the display content can be presented normally. For example, if the display interface of the content is in an offset display state, it is adjusted to be centered; if the display interface is presented normally, the normal presentation is maintained.
  • specifically, in this embodiment, when the face position offset value was previously judged not to be greater than the preset offset threshold, it is determined whether a timer is currently running; if so, it is determined whether the timer has reached the set timing threshold, and if it has, the timer is stopped and the display interface is reset, i.e., displayed in the normal display mode of the display content (usually centered); if no timer is currently running, the timer is started.
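  • a minimal sketch of this timer logic follows (the threshold value and the monotonic clock are implementation assumptions): while no significant face offset is observed, a timer runs; once it passes the timing threshold, the interface is reset to its normal, centered display.

```python
import time

# Timer-driven reset: returns True exactly when the display interface
# should be restored to its normal (usually centered) display mode.
class RecenterTimer:
    def __init__(self, threshold_s=1.0):
        self.threshold_s = threshold_s   # e.g., 0.5 s, 1 s, or 2 s
        self.started = None

    def on_frame(self, offset_exceeds_threshold):
        if offset_exceeds_threshold:
            self.started = None          # face is moving: cancel timing
            return False
        if self.started is None:
            self.started = time.monotonic()   # start timing stability
            return False
        if time.monotonic() - self.started >= self.threshold_s:
            self.started = None          # stop the timer and reset display
            return True
        return False
```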
  • in addition, the result of adjusting and controlling the display content on the screen can be invoked periodically and shown on the terminal screen. Taking the user's walking as an example, the jitter process is relatively regular; therefore, to reduce device energy consumption, the terminal device can perform display control with the above method at intervals (for example, a fixed period): the terminal device only needs to complete, within one time period, the collection of the face images, the calculation of the spatial coordinates, and the adjustment of the display content, and to record the adjustment result; in each subsequent time period, the display content on the screen can be displayed using that adjustment result.
  • of course, to further ensure the accuracy of the adjustment control of the display content while reducing device energy consumption, the terminal device may also execute the process of the foregoing method once every preset number of time periods.
  • as shown in FIG. 4B, first, the current face image of the user is acquired by the camera (S401), the face reference point is extracted from the current face image, and the image coordinates of the face reference point in the image coordinate system are determined (S402). It is then determined whether the image coordinates, in the image coordinate system, of the previously acquired face image, i.e., the position of the previously acquired face image, are stored in memory (S403). If they are not stored, no offset of the face position has occurred, and the image coordinates of the face reference point of the current face image are stored (S408). If they are stored, an offset of the face position has occurred; in this case, the face position offset value of the current face image relative to the previously acquired face image is calculated (S404), and the obtained face position offset value is compared with a preset threshold (S405).
  • then, when the face position offset value is judged to be greater than the preset threshold, the subsequent display control operations are performed: a display position offset value for moving the display content on the display screen is determined based on the face position offset value and the preset maximum allowable offset value (S406), and display control is performed on the currently displayed content according to the calculated display position offset value (S407).
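  • tying the FIG. 4B flow together, the following sketch reuses the face_reference_point, should_adjust, and display_offset helpers from the earlier sketches; get_frame and move_content are hypothetical callables that the host application would supply.

```python
# One possible main loop for S401-S408: collect, extract, compare with the
# stored previous position, and shift the display content when warranted.
def display_control_loop(get_frame, move_content, dx_max, dy_max):
    prev = None                              # stored previous position
    while True:
        frame = get_frame()                  # S401: collect face image
        point = face_reference_point(frame)  # S402: image coordinates
        if point is None:
            continue                         # no face located in this frame
        if prev is None:
            prev = point                     # S403/S408: first image, store
            continue
        dx1 = point[0] - prev[0]             # S404: face position offset
        dy1 = point[1] - prev[1]
        if should_adjust(dx=dx1, dy=dy1):    # S405: compare with threshold
            dx2, dy2 = display_offset(dx1, dy1, dx_max, dy_max)  # S406
            move_content(dx2, dy2)           # S407: same-direction shift
        prev = point                         # store the current position
```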
  • FIGS. 5A to 5C illustrate an application scenario of an embodiment of the present invention.
  • as shown in FIG. 5A, the display interface of a mobile terminal (for example, an iPad) is provided with an anti-shake function button, namely the "VR" button in FIG. 5A. When the user taps the "VR" button, the system enters the anti-shake state; at the same time, the display interface can be reduced to a preset scale so as to leave a preset offset buffer around it, as shown in FIG. 5B. When an offset of the user's face reference point is detected, the display interface is controlled to shift correspondingly, as shown in FIG. 5C.
  • FIG. 6 is a block diagram of a display control device in accordance with a second embodiment of the present invention.
  • as shown in FIG. 6, the display control device 60 includes an image acquisition unit 61 (for example, a camera), an extraction unit 63 (for example, a processor), a first calculation unit 65 (for example, a calculator), a second calculation unit 67 (for example, a calculator), and a display control unit 69 (for example, a controller).
  • the apparatus may further comprise a storage unit (not shown, such as a memory) for storing the relevant data.
  • the image acquisition unit 61 is configured to continuously collect face images containing face reference points, wherein the face images include: a first face image and a second face image; the extraction unit 63 is configured to extract the face reference points from the face images collected by the image acquisition unit 61, wherein the face reference points include: a first face reference point contained in the first face image and a second face reference point contained in the second face image; the first calculation unit 65 is configured to calculate the face position offset value according to the face reference points extracted by the extraction unit 63; the second calculation unit 67 is configured to calculate, according to the face position offset value calculated by the first calculation unit 65, the display position offset value of the content displayed on the display screen; and the display control unit 69 is configured to perform display control according to the display position offset value calculated by the second calculation unit 67.
  • as shown in FIG. 6, the first calculation unit 65 may have a structure mainly suited to calculating the face position offset value in a spatial coordinate system, which includes: an image coordinate calculation unit 651, configured to acquire image coordinates, the image coordinates including first image coordinates corresponding to the first face reference point and second image coordinates corresponding to the second face reference point; a spatial coordinate acquisition unit 653, configured to obtain spatial coordinates in the camera coordinate system according to the image coordinates acquired by the image coordinate calculation unit 651, the spatial coordinates including a first spatial coordinate corresponding to the first face reference point and a second spatial coordinate corresponding to the second face reference point; and a first offset value calculation unit 655, configured to calculate the coordinate offset value of the second spatial coordinate relative to the first spatial coordinate as the face position offset value.
  • further, the spatial coordinate acquisition unit 653 may have the following structure, including: a first acquisition subunit 653-1, configured to acquire the normalized coordinates corresponding to each face reference point in the camera coordinate system;
  • a second acquisition subunit 653-3, configured to calculate the unit vectors from the origin of the camera coordinate system to the face reference points according to the normalized coordinates acquired by the first acquisition subunit 653-1, and to obtain the angles between the unit vectors; a first calculation subunit 653-5, configured to calculate the distances from the origin of the camera coordinate system to the face reference points according to the length data between the face reference points input in advance via the input unit 70 and the unit vectors calculated by the second acquisition subunit 653-3; and a spatial coordinate acquisition subunit 653-7, configured to
  • determine the corresponding spatial coordinates of the face reference points in the camera coordinate system according to the distances calculated by the first calculation subunit 653-5 and the unit vectors acquired by the second acquisition subunit 653-3.
  • FIG. 7 is a structural block diagram of another example of this embodiment; its main difference from the structure shown in FIG. 6 is that the first calculation unit 65 may have a structure mainly suited to calculating the face position offset value in a plane coordinate system,
  • which includes: an image coordinate calculation unit 651, configured to acquire image coordinates, the image coordinates including first image coordinates corresponding to the first face reference point in the face images and second image coordinates corresponding to the second face reference point; and a second offset value calculation unit 657, configured to calculate the coordinate change of the second image coordinates relative to the first image coordinates as the face position offset value.
  • the above describes embodiments in which the display of the display screen is adjusted and controlled based on the offset or coordinate change of the face.
  • however, the present disclosure is not limited thereto; the present disclosure also provides another scheme that performs display control according to the change in the position of the terminal, in contrast to the first and second embodiments described above, in which the offset or position change of the face is used for display control.
  • as for which manner of display control is used, i.e., whether display control is performed with reference to the change in the position of the terminal or the change in the position of the user's face, this may be determined based on the hardware capability of the terminal, for example, whether the terminal has the ability to locate user reference objects (i.e., feature points, for example, the eyes, nose, eyebrows, etc.). The details are described below in conjunction with the embodiments.
  • as before, the terminal for implementing this embodiment and the fourth embodiment below may be pre-configured with an anti-shake function at the factory; the anti-shake function may be turned on or off according to actual input, or may be enabled by default, with the default enabling configured via the setting information in a configuration table.
  • FIG. 8 is a flow chart showing the method. As shown in FIG. 8, the method includes the following operations S81-S83. Specifically:
  • the movement data refers to movement parameters of the terminal and indicates the movement state of the terminal at a certain time or over a recent period.
  • for example, it may be a movement parameter such as an initial speed, an acceleration, a rotation speed, or a rotational acceleration, or it may be a parameter reflecting that the terminal has moved,
  • such as a change in the angle to a reference object or a change in distance; any parameter from which the relative displacement between the terminal and the user can be calculated is possible, and the present invention is not limited thereto.
  • the operation can be implemented in at least two ways:
  • Method 1: Obtain the acceleration of the terminal along the screen direction; or obtain the acceleration and the rotation angle of the terminal along the screen direction. In this way, the displacement of the terminal can be calculated directly.
  • Method 2: Calculate the change in the angle between the screen of the terminal and the line connecting the screen of the terminal to the user reference position. In this way, a reference parameter can be obtained for use as a basis for the subsequent adjustment of the relative displacement.
  • after the relative displacement is determined, the display position offset value, i.e., the offset value of the displayed image on the display screen of the terminal, can further be calculated.
  • for example, the display position offset value may be calculated according to the relative displacement described above, with the direction of the display position offset value opposite to the direction of the relative displacement. Since the jitter of the terminal may be very large, moving the content on the screen sometimes cannot completely cancel the jitter or sway of the terminal, but it at least reduces the impact of the jitter.
  • in addition, a predetermined threshold may be set, and how display control is performed may be decided according to the magnitude relationship between the relative displacement and the threshold.
  • in general, the display position offset value is smaller than the relative displacement, i.e., the ratio of the display position offset value to the relative displacement is less than one.
  • in a specific example, an accelerometer and a gyroscope are provided on a mobile phone to collect data on the phone's jitter, from which the movement distance of the phone in the direction of the screen plane is calculated; the display of the phone screen is then controlled so that
  • the content displayed on the phone (for example, text) moves opposite to the phone.
  • in this way, even while the phone is shaking, the text displayed on the screen stays relatively static with respect to the user, making it easier to see the content on the screen.
  • as shown in FIG. 9, the x-axis, the y-axis, and the z-axis are defined, the z-axis being the direction of the distance from the user.
  • the displacements along the x-axis and the y-axis are calculated and denoted Sx and Sy, respectively.
  • the rotation angles about the x-axis, the y-axis, and the z-axis are obtained from the gyroscope and denoted Rx, Ry, and Rz, respectively.
  • next, the collected raw data is converted into pixel data.
  • suppose the physical size of the phone screen is width w and height h, and the resolution is Pw × Ph. The conversion can be done by a scaling module, such as a calculator:
  • Psw = (Pw/w) × Sx, where Psw is the number of displacement pixels in the x-axis direction.
  • Psh = (Ph/h) × Sy, where Psh is the number of displacement pixels in the y-axis direction.
  • then, the adjustment control of the display content is performed. Specifically, according to the rotation angles Rx, Ry, and Rz collected by the gyroscope, the screen content is rotated three-dimensionally within a certain range, thereby neutralizing the displacement of the device. This can be done by an adjustment module, such as a controller.
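  • a hedged sketch of the conversion and the counter-move follows: the physical displacement in the screen plane is scaled to pixels with the formulas above, and the content is shifted the opposite way. The damping ratio reflects the earlier note that the display offset is generally smaller than the relative displacement; its value is an illustrative assumption, and the 3-D rotation compensation is omitted.

```python
# Convert the terminal's in-plane displacement (Sx, Sy) into a pixel shift
# and move the content in the opposite direction (third embodiment).
def counter_shift(sx, sy, w, h, pw, ph, ratio=0.8):
    """sx, sy: displacement in the same physical units as w, h (screen
    width and height); pw, ph: resolution in pixels; ratio < 1 damps the
    correction. Returns the (x, y) pixel shift to apply to the content."""
    psw = (pw / w) * sx      # displacement pixel count on the x-axis
    psh = (ph / h) * sy      # displacement pixel count on the y-axis
    return (-ratio * psw, -ratio * psh)
```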
  • in this embodiment, by acquiring the movement data of the mobile terminal, the relative displacement between the mobile terminal and the user is determined based on the movement data, and the display of the display content (e.g., an image) on the display screen is adjusted so that the content
  • moves in the direction opposite to the relative displacement, cancelling the vibration or jitter generated by the terminal. For the user, the jitter of the terminal's display content is visually reduced, so that the user can browse the display content more clearly, improving the user experience.
  • for ease of understanding, an exemplary architecture of the mobile phone used in the embodiments of the present invention is also given; refer specifically to FIG. 10. The architecture is also applicable to the first embodiment and the second embodiment described above.
  • it should be noted that the mobile phone architecture shown in FIG. 10 is merely exemplary, is intended to facilitate an understanding of the embodiments of the present invention, and does not limit them in any way.
  • the mobile phone includes: a radio frequency (RF) circuit 610, a memory 620, an input unit 630, a display unit 640, a sensor 650, an audio circuit 660, a WiFi module 670, a processor 680, and a power supply 690.
  • the RF circuit 610 is used for transmitting and receiving information: in particular, it receives downlink information from the base station and passes it to the processor 680 for processing, and it transmits uplink data to the base station.
  • RF circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • RF circuitry 610 can also interact with the network and other devices via wireless communication.
  • the memory 620 can be used to store software programs and modules, and the processor 680 executes the various functional applications and data processing of the mobile phone by running the software programs or modules stored in the memory 620 and/or calling data in the memory 620; in particular, a method according to any of the above embodiments or examples of the present invention may be performed in this way.
  • Memory 620 can include random access memory, nonvolatile memory, and the like.
  • the processor 680 is the control center of the mobile phone and connects various parts of the entire mobile phone using various interfaces and lines.
  • Processor 680 can include one or more processing units, and as an implementation, an application processor and a modem processor can be integrated.
  • the input unit 630 is configured to receive input information, and may include a touch panel 631 and other input devices.
  • the input unit 630 can be used as the input unit 70 in the second embodiment described above.
  • the display unit 640 is for displaying, and may include a display panel 641.
  • the touch panel 631 may be overlaid on the display panel 641.
  • Sensor 650 can be a light sensor, a motion sensor, or other sensor.
  • as one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (usually three axes), and can detect the magnitude and direction of gravity when stationary; it can be used for applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tap detection).
  • program code for implementing the present invention may also be stored in a computer readable storage medium.
  • Examples of computer readable storage media include, but are not limited to, magnetic disks, magneto-optical disks, hard disks, and the like.
  • the display control apparatus 110 includes an acquisition unit 111, a calculation unit 113, and a control unit 115.
  • the acquisition unit 111 is configured to acquire the movement data of the terminal, for example, the acceleration of the terminal along the screen direction, the rotation angle and acceleration of the terminal along the screen direction, or the change in the angle between the terminal's screen and the line connecting the screen to the user reference position.
  • the acquisition unit 111 can be implemented by the accelerometer and gyroscope described above.
  • the calculating unit 113 is configured to calculate a relative displacement between the terminal and the user according to the movement data. Specifically, according to the acceleration, the displacement of the terminal along the screen direction is calculated as a relative displacement between the terminal and the user; or the displacement of the terminal along the screen direction is calculated according to the acceleration and the rotation angle as a relative displacement between the terminal and the user.
  • the calculation unit 113 can be implemented by a calculator.
  • the control unit 115 is configured to control the content displayed on the display screen of the terminal to move in a direction opposite to the relative displacement.
  • the control unit 115 can be implemented by a controller.

Abstract

A display control method and apparatus. One of the display control methods includes the following processing: continuously collecting face images and extracting the face reference points contained in the face images (101), wherein the face images include: a first face image containing at least one first face reference point and a second face image containing at least one second face reference point; acquiring a face position offset value based on the first face reference point and the second face reference point (103); and acquiring, based on the face position offset value, a display position offset value of the content displayed on the display screen, and performing display control according to the display position offset value (105). The display content can thus be adjusted and controlled according to the movement of the terminal or of the face, eliminating or weakening the influence of jitter.

Description

Display Control Method and Apparatus
This application claims priority to the following invention patent applications: Application No. 201510433741.3, entitled "A Control Method and Apparatus for Media Display on a Mobile Terminal", filed with the Chinese Patent Office on July 22, 2015; Application No. 201510400254.7, entitled "A Display Control Method and Mobile Terminal", filed with the Chinese Patent Office on July 9, 2015; and Application No. 201510731860.7, entitled "A Screen Content Adjustment Method and Device", filed with the Chinese Patent Office on November 2, 2015; the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of computer communication technologies, and more specifically to a display control method and apparatus.
Background
With the development of computer and communication technologies, portable terminals such as mobile phones, personal digital assistants (PDAs), and tablet computers have become increasingly integrated into users' daily lives. With the help of these terminal devices, users can communicate, take photographs, view displayed content, and engage in other entertainment activities such as games.
In the process of using a terminal, jitter is often encountered; for example, while the user is walking or traveling in a vehicle. The jitter referred to here is the jitter of the viewed display content relative to the user's eyes. This is because, when the user is in a non-stationary state, it is difficult to keep the handheld terminal device stationary relative to the eyes; thus, if the user watches the screen for a long time, dizziness and discomfort may occur. Moreover, under jitter, the distance between the terminal's screen and the user's face changes, so the angle, orientation, and so on of the content shown on the screen vary with the distance to the face, degrading the content browsing effect.
Summary of the Invention
The present invention has been made in view of the above problems in the related art. The present invention provides a display control method and apparatus.
According to one aspect of the present invention, a display control method is provided, comprising the following operations:
collecting face images continuously, and extracting face reference points contained in the face images, wherein the face images include: a first face image containing at least one first face reference point and a second face image containing at least one second face reference point;
acquiring a face position offset value based on the first face reference point and the second face reference point; and
acquiring, based on the face position offset value, a display position offset value of the content displayed on the display screen, and performing display control according to the display position offset value.
According to another aspect of the present invention, a display control apparatus is provided, comprising:
an image acquisition unit, configured to continuously collect face images containing face reference points, wherein the face images include: a first face image and a second face image;
an extraction unit, configured to extract the face reference points, wherein the face reference points include: a first face reference point contained in the first face image and a second face reference point contained in the second face image;
a first calculation unit, configured to calculate a face position offset value according to the face reference points extracted by the extraction unit;
a second calculation unit, configured to calculate, according to the face position offset value, a display position offset value of the content displayed on the display screen; and
a display control unit, configured to perform display control according to the display position offset value.
According to another aspect of the present invention, another display control method is provided, comprising the following operations:
acquiring movement data of the terminal;
determining a relative displacement between the terminal and the user according to the movement data; and
controlling the content displayed on the display screen of the terminal to move in the direction opposite to the relative displacement.
According to another aspect of the present invention, a display control apparatus is provided, comprising:
an obtaining unit, configured to acquire movement data of the terminal;
a calculation unit, configured to calculate a relative displacement between the terminal and the user according to the movement data; and
a control unit, configured to control the content displayed on the display screen of the terminal to move in the direction opposite to the relative displacement.
By means of any of the above technical solutions of the present invention, when jitter occurs, the display of the terminal screen can be adjusted according to the movement of the face or of the terminal, so that the relative movement between the terminal screen and the user is minimized or the two are even kept relatively static, thereby weakening or eliminating the discomfort caused by the jitter.
Brief Description of the Drawings
FIG. 1 is a schematic flow chart of a display control method according to a first embodiment of the present invention;
FIG. 2 is a flow chart of Example 1 according to the first embodiment of the present invention;
FIG. 3 is a schematic diagram of acquiring spatial coordinates based on face reference points;
FIG. 4A is a flow chart of Example 2 according to the first embodiment of the present invention;
FIG. 4B is a schematic flow chart of a display control method according to the first embodiment of the present invention;
FIGS. 5A-5C are schematic diagrams of an application scenario of a display control method according to an embodiment of the present invention;
FIG. 6 is a structural block diagram of a display control device according to one example of a second embodiment of the present invention;
FIG. 7 is a structural block diagram of a display control device according to another example of the second embodiment of the present invention;
FIG. 8 is a flow chart of a display control method according to a third embodiment of the present invention;
FIG. 9 is a schematic diagram of a display control method according to the third embodiment of the present invention;
FIG. 10 is a schematic architecture diagram of a mobile phone for implementing an embodiment of the present invention; and
FIG. 11 is a block diagram of a display control device according to a fourth embodiment of the present invention.
Detailed Description
In the following description, "face images" refers to a plurality of face images continuously acquired within a predetermined period, or a plurality of face images continuously acquired at predetermined time intervals; for convenience of description, two adjacent face images among the collected plurality of face images are taken as an example herein; and, as an example, the face images are collected by the front camera of the terminal device. A "face reference point" is the point in the image corresponding to a feature point on the person's face; the feature points on the face may include, but are not limited to, at least one of the user's nose, eyes, mouth, and the like; alternatively, the face reference point may also be set as the intersection between the line connecting the eyes and the line connecting the nose and mouth in the face image, the focus point of one eye, the focus point of both eyes, and so on.
"Display control" or "display adjustment" as mentioned herein includes, but is not limited to, one of the following or a combination thereof: translation processing, enlargement processing, reduction processing, and the like. The terminal devices involved in the embodiments of the present invention may include, but are not limited to: tablet computers, smart phones, notebook computers, PDAs, palmtop computers, personal computers, mobile Internet devices (MIDs), and other terminal devices capable of displaying content data; as well as mobile terminals such as handheld game consoles, in-vehicle devices such as on-board computer displays, point-of-sale (POS) terminals, and other handheld devices. The content displayed on the screen may include text, pictures, images, media, and other display content.
Based on the above overview, embodiments of the present invention are described in detail below with reference to the accompanying drawings.
As mentioned above, in the related art, jitter occurs while a user uses a portable terminal device, for example, while the user is in motion. To this end, the technical solution provided by the embodiments of the present invention adjusts and controls the display of the screen according to the change in the position of the user's face, or the change in the relative position of the user and the terminal, so as to balance or cancel the aforementioned relative position change, thereby weakening or eliminating the poor screen-viewing effects caused by jitter.
First Embodiment
First, the terminal for implementing this embodiment and the second embodiment below can be configured with an anti-shake function at the factory. In response to the user's command to enable the anti-shake function, the front camera of the terminal is turned on; that is, when the user turns on the anti-shake switch, the terminal's front camera is opened and the anti-shake processing function is enabled. Likewise, in response to the user's command to disable the anti-shake function, the terminal's front camera is closed and the anti-shake processing function is turned off. By providing user-operable enabling and disabling of the anti-shake function, anti-shake processing is performed only when the user has an anti-shake (i.e., display control) requirement and stops when the user has none, which saves system resources and reduces the resource occupancy of the terminal device. Of course, the method of the embodiments of the present invention can also be implemented by the user downloading an APP that integrates the display control method according to an embodiment of the present invention; the present invention is not limited in this respect.
The first embodiment provides a display control method in which the display of the screen is controlled according to changes in the position of the user's face. FIG. 1 is a flow chart of the display control method according to the first embodiment of the present invention; as shown in FIG. 1, the method includes the following operations S101-S105:
S101: Collect face images continuously, and extract the face reference points contained in the face images.
For convenience of description, the face images here are a first face image and a second face image, respectively, wherein the first face image may be the currently collected face image, and the second face image may be the previously collected face image adjacent to the currently collected one. For example, the first face image may contain at least one first face reference point, and the second face image may contain at least one second face reference point. The extraction of the reference points can be completed by locating the face in the current face image based on face recognition technology.
S103: Acquire a face position offset value based on the first face reference point and the second face reference point. This step amounts to determining the position offset of the face reference point between the two adjacent face images, i.e., of the current face image (the first face image) relative to the previous face image (the second face image).
S105: Acquire, based on the face position offset value, a display position offset value of the content displayed on the display screen, and perform display control according to the display position offset value. The display position offset value is used to shift the content or display interface currently shown on the screen in the same direction as the face reference point, so that the human eye does not perceive jitter of the displayed content relative to the eyes.
The details of the above operations are described below.
Regarding the face image collection in step S101, two exemplary ways are provided herein; of course, the present invention is not limited thereto.
Method 1: Collect within a predetermined period. The predetermined period here may be the duration of one jitter band acquired by the built-in gyroscope, which may be the time for the user to walk one step, or the duration of one jitter band of a vehicle in motion. To avoid misjudging a jitter band, two acceleration thresholds can be introduced for the judgment. For example, for the time of one walking step, two acceleration thresholds A and B can be preset, where A is smaller than B; the built-in gyroscope acquires the acceleration band while the user walks, and the terminal device can select, as the time for the user to walk one step, the duration of the following band within the acceleration band: the band that begins when the acceleration data first exceeds B and, during the subsequent band variation, ends when the acceleration data falls below A.
Method 2: Collect at predetermined time intervals. For example, a face image is acquired every 0.05, 0.1, 0.2, 0.5, or 1 second. The predetermined time interval here can be set according to actual needs; the present invention places no specific limitation on it.
Acquiring the face position offset value
The face position offset value may be an offset value in plane coordinates, or an offset value in another set spatial coordinate system such as a camera coordinate system; the present invention places no limitation on this. The two cases are described below with examples: Example 1 is based on the camera coordinate system; Example 2 is based on the plane coordinate system.
Example 1: Based on the camera coordinate system
In Example 1, as shown in FIG. 2, the operation of acquiring the face position offset value in step S103 can be implemented by the following S21-S25:
S21: Acquire image coordinates: the coordinates of the face image in the plane coordinate system, including first image coordinates corresponding to the first face reference point and second image coordinates corresponding to the second face reference point;
S23: Acquire spatial coordinates: based on the image coordinates, acquire spatial coordinates in the set spatial coordinate system, the spatial coordinates including a first spatial coordinate corresponding to the first face reference point and a second spatial coordinate corresponding to the second face reference point; for example, the set spatial coordinate system may be a camera coordinate system, preset in the terminal device, with the optical axis center of the camera as the origin;
S25: Take the coordinate offset value of the second spatial coordinate relative to the first spatial coordinate as the face position offset value, thereby completing the operation of acquiring the face position offset value.
Optionally, when acquiring the face position offset value, the moving speed and/or acceleration of the position offset of the first face reference point (the current face reference point) relative to the second face reference point (the previous face reference point) may further be obtained according to the time interval between the collected face images and the face position offset value; subsequently, when calculating the display position offset value, a display position shift speed can be calculated from this moving speed and/or acceleration, and display control can be performed according to the display position shift speed.
In this example, before collecting the face images, the terminal can acquire externally input length data between at least two face reference points. For example, the length data can be input by the user after manual measurement and can include the distance from the nose to each of the two eyes, as well as the distance between the two eyes.
On this basis, in this example, the acquisition of the spatial coordinates is achieved by the following operations:
First, obtain the normalized coordinates corresponding to the image coordinates in the camera coordinate system; according to the normalized coordinates, calculate the unit vectors from the origin of the camera coordinate system to the face reference points, and obtain the angles between the unit vectors; according to the pre-input length data between the face reference points and the angles between the unit vectors, calculate the distances from the origin of the camera coordinate system to the face reference points; and according to the distances from the origin to the face reference points and the unit vectors, calculate the corresponding spatial coordinates of the face reference points in the camera coordinate system, thereby completing the operation of acquiring the spatial coordinates.
FIG. 3 is a schematic diagram of acquiring spatial coordinates based on face reference points. The case of collecting two face images is taken as an example here, but more than two face images may also be used; the present invention is not limited in this respect. Suppose each face image includes three face reference points whose corresponding spatial coordinates in the camera coordinate system are P1, P2, and P3, respectively. Using image processing technology, the terminal device can acquire the image coordinates corresponding to the face reference points of each of the two face images, namely (u1, v1), (u2, v2), and (u3, v3) (i.e., the operation of acquiring image coordinates described above). Then, the normalized coordinates (X1c1, Y1c1), (X1c2, Y1c2), and (X1c3, Y1c3) corresponding to the image coordinates can be obtained in the camera coordinate system using the following normalization formula (i.e., the operation of obtaining normalized coordinates described above); in the standard pinhole form, with (u0, v0) the principal point and fx, fy the focal lengths in pixels, the formula is:

X1ci = (ui − u0) / fx,   Y1ci = (vi − v0) / fy   (i = 1, 2, 3)
The terminal device can use the above normalized coordinates to calculate the unit vectors from the origin O of the camera coordinate system to the face reference points, i.e., the unit vectors e1, e2, and e3 from the point O to the three points P1, P2, and P3, respectively; the unit vector calculation formula used is:

ei = (X1ci, Y1ci, 1)ᵀ / √(X1ci² + Y1ci² + 1)   (i = 1, 2, 3)
The terminal device obtains the angles between the unit vectors, denoting the angle between e2 and e3 as α, the angle between e1 and e3 as β, and the angle between e1 and e2 as γ, so that:

cos α = e2 · e3,   cos β = e1 · e3,   cos γ = e1 · e2
Then, the terminal device can calculate the distances from the origin of the camera coordinate system to the face reference points according to the length data between the face reference points and the angles between the unit vectors. Let the distances from the point O to the three points P1, P2, and P3 be d1, d2, and d3, respectively; by the law of cosines:

d2² + d3² − 2·d2·d3·cos α = a²    (1)
d1² + d3² − 2·d1·d3·cos β = b²    (2)
d1² + d2² − 2·d1·d2·cos γ = c²    (3)

where a denotes the length data between P2 and P3, b denotes the length data between P1 and P3, and c denotes the length data between P1 and P2.
Let:

x = d2/d1,   y = d3/d1

Substituting into equations (1), (2), and (3) gives:

x² + y² − 2·x·y·cos α = a²/d1²    (4)
1 + y² − 2·y·cos β = b²/d1²    (5)
1 + x² − 2·x·cos γ = c²/d1²    (6)
Combining equations (4) and (5) and eliminating d1² gives:

b²·(x² + y² − 2·x·y·cos α) = a²·(1 + y² − 2·y·cos β)    (7)
Combining equations (5) and (6) and eliminating d1² gives:

c²·(1 + y² − 2·y·cos β) = b²·(1 + x² − 2·x·cos γ)    (8)
Combining equations (7) and (8) and eliminating x² gives:

x = [(b² + c² − a²)·y² + 2·(a² − c²)·cos β·y + (c² − a² − b²)] / [2·b²·(y·cos α − cos γ)]    (9)
Substituting equation (9) into equation (7) yields a quartic equation in y:

a4·y⁴ + a3·y³ + a2·y² + a1·y + a0 = 0    (10)

where the coefficients a4, a3, a2, a1, and a0 are polynomial expressions in the lengths a, b, c and in cos α, cos β, and cos γ, obtained by expanding that substitution (their explicit forms are given as equation images in the original filing).
The value of y can be obtained by solving equation (10); substituting the value of y into equation (9) gives the value of x. The value of d1 can then be obtained using equations (4), (5), and (6), and from it the values of d2 and d3 are calculated.
Then, the terminal device can determine the corresponding spatial coordinates of the face reference points in the camera coordinate system according to the distances from the origin to the face reference points and the calculated unit vectors, using the following coordinate calculation formula:

Pi = di · ei   (i = 1, 2, 3)
At this point, the calculation of the spatial coordinates is complete. Then, based on the calculated first spatial coordinate and second spatial coordinate, the coordinate offset value between the two is calculated and taken as the face position offset value; from it, the display position offset value on which the adjustment and control of the display content on the terminal screen is based is further obtained, and display control is performed accordingly.
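As a hedged illustration of the Example 1 computation, the sketch below solves equations (7) and (8) numerically for x = d2/d1 and y = d3/d1 instead of expanding the quartic (10), then recovers d1 from equation (5) and the spatial coordinates from Pi = di·ei. The camera intrinsics fx, fy, (u0, v0) and the frontal-face initial guess are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import fsolve

def unit_vectors(uv, fx, fy, u0, v0):
    """Normalize pixel coordinates (u_i, v_i) and return unit vectors e_i
    from the camera origin O toward each face reference point."""
    es = []
    for u, v in uv:
        p = np.array([(u - u0) / fx, (v - v0) / fy, 1.0])  # normalized
        es.append(p / np.linalg.norm(p))
    return es

def space_coordinates(uv, lengths, fx=800.0, fy=800.0, u0=320.0, v0=240.0):
    """uv: three (u, v) image coordinates; lengths: (a, b, c) = |P2P3|,
    |P1P3|, |P1P2|, the user-measured length data. Returns P1, P2, P3."""
    a, b, c = lengths
    e1, e2, e3 = unit_vectors(uv, fx, fy, u0, v0)
    cos_a, cos_b, cos_g = e2 @ e3, e1 @ e3, e1 @ e2   # cos α, cos β, cos γ

    def eqs(w):                       # equations (7) and (8) above
        x, y = w
        f7 = b*b*(x*x + y*y - 2*x*y*cos_a) - a*a*(1 + y*y - 2*y*cos_b)
        f8 = c*c*(1 + y*y - 2*y*cos_b) - b*b*(1 + x*x - 2*x*cos_g)
        return [f7, f8]

    x, y = fsolve(eqs, [1.0, 1.0])    # start near 1: roughly frontal face
    d1 = b / np.sqrt(1 + y*y - 2*y*cos_b)   # from equation (5)
    d2, d3 = x * d1, y * d1
    return [d * e for d, e in zip((d1, d2, d3), (e1, e2, e3))]  # Pi = di·ei
```

Running this on the reference points of two successive face images and subtracting the resulting coordinates gives the coordinate offset value used as the face position offset value.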
Example 2: Based on the plane coordinate system
In Example 2, as shown in FIG. 4A, the operation of acquiring the face position offset value in step S103 can be implemented by the following S41-S43:
S41 (the same as S21 above): Acquire image coordinates, i.e., the coordinates of the face image in the plane coordinate system, including the first image coordinates corresponding to the first face reference point (the current position) and the second image coordinates corresponding to the second face reference point (the previous position);
As one option, the acquired image coordinates can be stored in memory, so that whether a face offset has occurred can be determined by comparing the first image coordinates with the second image coordinates. Specifically, if the two are the same, no face offset has occurred; conversely, if the two differ, a face offset has occurred.
As another option, the currently obtained image coordinates may be stored only when the image coordinates obtained in two successive acquisitions have changed. In this implementation, it can first be determined whether the first image coordinates of the face reference point (the previous position) are stored; if so, the current face image is not the first face image, a face image was also collected before, i.e., movement of the face has occurred, and the subsequent steps can continue. If the first image coordinates do not exist, no movement of the face has occurred, and subsequent operations are unnecessary. When the previous position of the face reference point in the coordinate system is stored:
S43: By comparing the first image coordinates and the second image coordinates, take the coordinate change of the second image coordinates relative to the first image coordinates as the face position offset value.
人脸位置偏移的幅度可以通过第二图像坐标与第一图像坐标的差异来表示。在具体实现过程中,可以根据人脸基准点的任意偏移进行实时显示调整控制,也可以在基准点存在微小偏移情况下不进行调整控制,此时,可以设置一个偏移阈值,在该人脸位置偏移值大于偏移阈值时才进行显示调整控制。
For example, the face position offset value includes at least one of: an X coordinate offset value, a Y coordinate offset value, and a distance offset value; correspondingly, the preset threshold includes at least one of: an X coordinate offset threshold, a Y coordinate offset threshold, and a distance offset threshold.
Specifically, as one implementation, the face position offset value includes an X coordinate offset value, a Y coordinate offset value, and a distance offset value. Further, it may be determined whether the X coordinate offset value is greater than a preset X coordinate offset threshold, whether the Y coordinate offset value is greater than a preset Y coordinate offset threshold, and whether the distance offset value is greater than a preset distance offset threshold; if any of the three determinations is yes, the subsequent operation of obtaining the display position offset value is executed.
As another implementation, the face position offset value includes an X coordinate offset value and a Y coordinate offset value. In this step, correspondingly, it may be determined whether the X coordinate offset value is greater than the preset X coordinate offset threshold and whether the Y coordinate offset value is greater than the preset Y coordinate offset threshold; if either determination is yes, the subsequent operation of obtaining the display position offset value is executed.
As yet another implementation, the face position offset value includes an X coordinate offset value or a Y coordinate offset value. In this case, it may be determined only whether the X coordinate offset value is greater than the preset X coordinate offset threshold, or whether the Y coordinate offset value is greater than the preset Y coordinate offset threshold; if the result is yes, the subsequent operation of obtaining the display position offset value is executed.
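A minimal sketch of this gating, covering all three implementation variants; the function name and the convention of passing None for unused components are illustrative assumptions.

```python
def exceeds_threshold(offset, thresholds):
    """Return True if any supplied component of the face position offset
    exceeds its preset threshold. Both arguments are (x, y, distance)
    triples; a component left as None is excluded from the check, which
    covers all three implementation variants above."""
    return any(v is not None and t is not None and abs(v) > t
               for v, t in zip(offset, thresholds))

# Variant one checks all three components; variant three checks only X:
# exceeds_threshold((dx, dy, dd), (TX, TY, TD))
# exceeds_threshold((dx, None, None), (TX, None, None))
```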
It should be noted that the above operation of deciding whether to perform display adjustment control according to the offset threshold also applies to Example 1 above, and is not repeated here.
Obtaining the display position offset value
Three ways of obtaining the display position offset value are provided below as examples and are described in turn. As will be seen from the following description, in the embodiments of the present invention the face position offset value and the display position offset value may be equal or unequal; the present invention is not limited in this respect.
Way 1: obtain the display position offset value from the face position offset value and perform adjustment control of the display content directly on that basis. Taking the operation of Example 1 as an example: if the coordinate offset value indicates that the second spatial coordinates have translated 3 mm along the positive X axis relative to the first spatial coordinates, the content displayed on the screen is translated 3 mm along the positive X axis. Similarly, in the Z-axis direction, if the coordinate offset value indicates that the second spatial coordinates have moved 3 mm along the positive Z axis relative to the first spatial coordinates, the content displayed on the screen may be scaled down proportionally, and so on.
Way 2: the display position offset value may be obtained, and display control then performed, only when the face position offset value is determined to be greater than the preset offset threshold. For details, refer to the description of Example 2 above.
Way 3: a maximum allowable offset value may be set. In one implementation, the set maximum allowable offset value means an offset amount that does not affect the relative completeness of the presentation of the display content; that is, a certain amount of incomplete presentation of the display content is allowed, provided that the incomplete presentation does not affect the overall understanding of the display content. In another implementation, an offset buffer area is provided around the display screen: the content display interface does not completely fill the entire display screen. In normal display, the content display interface is centered on the display screen with the offset buffer area around it; when shaking subsequently occurs, the content display interface can shift within this offset buffer area, and the offset distance corresponding to the maximum boundary position of the offset buffer area is the maximum allowable offset value.
Specifically, when the face position offset value is greater than the preset offset threshold, the display position offset value may further be determined according to the face position offset value and the preset maximum allowable offset value. The maximum allowable offset value here may include a maximum offset value in the X-axis direction and a maximum offset value in the Y-axis direction.
Taking the case where the display content is video content as an example, the maximum offset value in the X-axis direction may be set to DxMax = DeviceWidth × 0.2 and the maximum offset value in the Y-axis direction to DyMax = DeviceHeight × 0.1. As another example, for text content, the maximum offset value in the X-axis direction may be set to DxMax = DeviceWidth × 0.05 and the maximum offset value in the Y-axis direction to DyMax = DeviceHeight × 0.02, where DeviceWidth is the width of the terminal device and DeviceHeight is its height.
Taking the operation of Example 2 as an example, the one of the X coordinate offset value and the maximum offset value in the X-axis direction with the smaller absolute value may be taken as the display position offset value in the X-axis direction; and/or the one of the Y coordinate offset value and the maximum offset value in the Y-axis direction with the smaller absolute value may be taken as the display position offset value in the Y-axis direction. The purpose is to convert an excessively large movement offset of the user into a reasonable value, so that after adjustment by this reasonable offset value the display content remains relatively complete for viewing.
The vector direction of the display position offset value is consistent with the vector direction of the face position offset value.
For example, the display position offset value may be determined according to the following formula (11):
Dx2 = (Dx1 < 0 ? −1 : 1) × min(|Dx1|, DxMax)
Dy2 = (Dy1 < 0 ? −1 : 1) × min(|Dy1|, DyMax)    (11)
where Dx1 and Dy1 are the face position offset values computed in the preceding step, that is, the X coordinate offset value and the Y coordinate offset value; and Dx2 and Dy2 are the display position offset values computed in the current operation, that is, the X coordinate display position offset value and the Y coordinate display position offset value.
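Formula (11) transcribes directly into code. In the following Python sketch, the function name and the per-content-type ratio table are illustrative; the ratios reuse the example values for video and text given above.

```python
def display_offset(dx1, dy1, device_width, device_height, content="video"):
    """Clamp the face position offset (Dx1, Dy1) to the maximum allowable
    offset per formula (11), preserving the sign of each component."""
    ratios = {"video": (0.2, 0.1), "text": (0.05, 0.02)}  # example ratios above
    rx, ry = ratios[content]
    dx_max, dy_max = device_width * rx, device_height * ry
    dx2 = (-1 if dx1 < 0 else 1) * min(abs(dx1), dx_max)
    dy2 = (-1 if dy1 < 0 else 1) * min(abs(dy1), dy_max)
    return dx2, dy2
```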
Display control
After the display position offset value is obtained, the content displayed on the terminal screen, or the interface in which the display content resides, is shifted in the same direction as the offset of the face reference point, according to the display position offset value. In a specific implementation, this may be carried out by a modified media player such as a video player or an e-book reader.
If the above display position offset value is a vector value, the display content or display interface is simply controlled to shift according to the vector display position offset value; alternatively, if only the absolute magnitude of the display position offset value is obtained, the display content or display interface is controlled to shift by the display position offset value in the same direction as the face position offset value.
In this embodiment, during display control, whether the display content or display interface is adjusted according to the display position offset value or a display interface in an offset state is adjusted back to center, the adjustment may be performed at a certain speed. Optionally, where the face position offset speed/acceleration has been computed earlier and a display position offset speed/acceleration has been computed from it, display control of the content displayed on the terminal screen, or of the interface in which the display content resides, may be performed according to this display position offset speed/acceleration. That is, the display position offset speed may be adapted to the speed at which the position of the user's face reference point shifts (that is, the above face position offset speed). The display position offset speed follows the principle of making the user perceive as little shaking as possible. For example, when a content display interface in an offset state is adjusted back to center, since the user is essentially in a stable state, an adjustment control speed that the user is unlikely to notice may be used.
In a specific implementation, if the user keeps the terminal device relatively stable for longer than a set threshold, for example 0.5, 1 or 2 seconds, the user may be considered to be in a stable state, and accordingly the display content can be presented normally: if the content display interface is in an offset display state, it is adjusted back to center; if the display interface is presented normally, that normal presentation is maintained. Specifically, in this embodiment, when it has previously been determined that the face position offset value is not greater than the preset offset threshold, it is determined whether a timer is currently running. If so, it is determined whether the timer has reached the set timing threshold; if it has, the timer is stopped and the display interface is controlled to reset, that is, to display in the normal display manner of the display content (usually centered). If no timer is currently running, the timer is started.
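A minimal sketch of this timer logic, assuming a monotonic clock and an externally supplied reset callback; all names are illustrative, not from the original.

```python
import time

class RecenterTimer:
    """Re-centers the display once the face has stayed stable (offset not
    above the threshold) for hold_s seconds; any detected offset cancels
    the countdown."""

    def __init__(self, hold_s=1.0):
        self.hold_s = hold_s
        self.started = None

    def on_stable(self, reset_display):
        now = time.monotonic()
        if self.started is None:
            self.started = now                 # start the timer
        elif now - self.started >= self.hold_s:
            self.started = None                # stop the timer
            reset_display()                    # restore centered display

    def on_offset(self):
        self.started = None                    # movement detected: cancel timing
```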
In addition, the result of the adjustment control of the display content on the screen may be invoked periodically for display on the terminal screen. Taking the user's walking as an example, the shaking process is relatively regular, so to reduce device energy consumption the terminal device may perform display control using the above method, and thereby display the content on the screen, only once per interval (for example, per fixed period). That is, the terminal device only needs to complete the capture of face images, the computation of spatial coordinates, and the adjustment of the display content within one time period, record the adjustment result of the display content, and use that adjustment result to display the content on the screen in each subsequent time period. Of course, to further guarantee the accuracy of the adjustment control of the display content while reducing device energy consumption, the terminal device may also execute the above method once every preset number of time periods.
Below, taking the plane-coordinate-system scheme of obtaining the display position offset value described in Example 2 as an example, and with reference to the flowchart of FIG. 4B, the display control flow according to the first embodiment of the present invention is further described.
As shown in FIG. 4B, first, the current face image of the user is captured by the camera (S401), face reference points are extracted from the current face image, and the image coordinates of the face reference points in the image coordinate system are determined (S402). It is then determined whether the memory stores the image coordinates, in the image coordinate system, of the previously captured face image, that is, the position of the previously captured face image (S403). If not stored, no face position offset has occurred, and the image coordinates of the face reference points of the current face image are stored (S408). If stored, a face position offset has occurred; in this case, the face position offset value of the current face image relative to the previously captured face image is computed (S404) and compared with the preset threshold (S405). Then, when the face position offset value is determined to be greater than the preset threshold, the subsequent display control operations are performed: based on the face position offset value and the preset maximum allowable offset value, the display position offset value used to move the display content on the display screen is determined (S406), and display control of the currently displayed content is performed according to the computed display position offset value (S407).
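Putting the S401 to S408 flow together, one possible loop is sketched below; the camera and display interfaces are hypothetical, and exceeds_threshold and display_offset refer to the sketches above.

```python
def display_control_loop(camera, display, thresholds=(5.0, 5.0, None)):
    """One realization of the S401-S408 flow in the plane-coordinate variant.
    camera.face_point() and display.shift() are assumed interfaces."""
    prev = None
    while True:
        u, v = camera.face_point()              # S401-S402: capture + extract
        if prev is None:
            prev = (u, v)                       # S403 -> S408: store and wait
            continue
        dx1, dy1 = u - prev[0], v - prev[1]     # S404: face position offset
        if exceeds_threshold((dx1, dy1, None), thresholds):               # S405
            dx2, dy2 = display_offset(dx1, dy1, display.width, display.height)  # S406
            display.shift(dx2, dy2)             # S407: shift content with the face
        prev = (u, v)
```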
To facilitate understanding of the above operations, FIG. 5A to FIG. 5C illustrate an application scenario of an embodiment of the present invention. As shown in FIG. 5A, an anti-shake function button, namely the "VR" button in FIG. 5A, is provided in the display interface of a mobile terminal (for example, an iPad). When the user taps the "VR" button, the system enters the anti-shake state, and the display interface may be scaled down to a preset ratio so as to leave a preset offset buffer area around it, as shown in FIG. 5B. When an offset of the user's face reference point is detected, the display interface is controlled to shift accordingly, as shown in FIG. 5C.
Second Embodiment
According to an embodiment of the present invention, a display control apparatus is provided, which may be used to implement the display control method according to the first embodiment above. FIG. 6 is a block diagram of the display control apparatus according to the second embodiment of the present invention. As shown in FIG. 6, the display control apparatus 60 includes an image capture unit 61 (for example, a camera), an extraction unit 63 (for example, a processor), a first computing unit 65 (for example, a calculator), a second computing unit 67 (for example, a calculator), and a display control unit 69 (for example, a controller). Optionally, the apparatus may further include a storage unit (not shown; for example, a memory) for storing related data.
Specifically, the image capture unit 61 is configured to continuously capture face images containing face reference points, where the face images include a first face image and a second face image; the extraction unit 63 is configured to extract the face reference points from the face images captured by the image capture unit 61, where the face reference points include a first face reference point contained in the first face image and a second face reference point contained in the second face image; the first computing unit 65 is configured to compute the face position offset value according to the face reference points extracted by the extraction unit 63; the second computing unit 67 is configured to compute the display position offset value of the content displayed on the display screen according to the face position offset value computed by the first computing unit 65; and the display control unit 69 is configured to perform display control according to the display position offset value computed by the second computing unit 67.
Further, as shown in FIG. 6, in one example, the first computing unit 65 may have the following structure, which is mainly suitable for computing the face position offset value in a spatial coordinate system. It includes: an image coordinate computing unit 651, configured to obtain image coordinates, the image coordinates including the first image coordinates corresponding to the first face reference point and the second image coordinates corresponding to the second face reference point; a spatial coordinate obtaining unit 653, configured to obtain the spatial coordinates in the camera coordinate system according to the image coordinates obtained by the image coordinate computing unit 651, the spatial coordinates including the first spatial coordinates corresponding to the first face reference point and the second spatial coordinates corresponding to the second face reference point; and a first offset value computing unit 655, configured to compute the coordinate offset value of the second spatial coordinates relative to the first spatial coordinates as the face position offset value.
In one example, the above spatial coordinate obtaining unit 653 may have the following structure. It includes: a first obtaining subunit 653-1, configured to obtain the normalized coordinates, in the camera coordinate system, corresponding to the image coordinates of each face reference point; a second obtaining subunit 653-3, configured to compute, according to the normalized coordinates obtained by the first obtaining subunit 653-1, the unit vectors from the origin of the camera coordinate system to the face reference points, and to obtain the angles between the unit vectors; a first computing subunit 653-5, configured to compute the distances from the origin of the camera coordinate system to the face feature points according to the length data between the face feature points pre-input via the input unit 70 and the angles between the unit vectors computed by the second obtaining subunit 653-3; and a spatial coordinate obtaining subunit 653-7, configured to determine the spatial coordinates corresponding to the face reference points in the camera coordinate system according to the distances from the origin to the face feature points computed by the first computing subunit 653-5 and the unit vectors obtained by the second obtaining subunit 653-3.
FIG. 7 is a structural block diagram of another example of this embodiment. Its main difference from the structure shown in FIG. 6 is that the first computing unit 65 may have a structure mainly suitable for computing the face position offset value in a plane coordinate system. As shown in FIG. 7, it includes: an image coordinate computing unit 651, configured to obtain image coordinates, the image coordinates including the first image coordinates corresponding to the first face reference point in the face image and the second image coordinates corresponding to the second face reference point; and a second offset value computing unit 657, configured to compute the coordinate offset of the second image coordinates relative to the first image coordinates and take it as the face position offset value.
The operational details of each component in the second embodiment can be understood and implemented with reference to the first embodiment above. Where there is no conflict, the features of the first and second embodiments may be combined with each other; to avoid unnecessarily obscuring the present invention, they are not repeated here.
The above describes embodiments in which the display of the display screen is adjusted and controlled based on an offset or coordinate change of the face. The present disclosure is not limited to this; it further provides another scheme in which display control is performed according to a change in the position of the terminal. Compared with the first and second embodiments above, in this scheme display control is performed according to the offset or position change of the terminal. In a specific implementation, which way is used for display control, that is, whether display control refers to the position change of the terminal or the position change of the user's face, may be determined based on the hardware capability of the terminal, for example, whether the terminal has the capability of obtaining the user's reference object (that is, feature points, for example, locating the eyes, the nose, or the midpoint between the eyebrows). This is described in detail below with reference to embodiments.
Third Embodiment
First, the terminal used to implement this embodiment, as well as the fourth embodiment below, may be pre-configured with an anti-shake function at the factory. The anti-shake function may be turned on or off in response to an instruction input by the user according to actual needs, or may be turned on by default under normal conditions; default enabling may be implemented through configuration setting information in a configuration table.
According to the third embodiment of the present invention, a display control method is provided. FIG. 8 is a flowchart of the method; as shown in FIG. 8, the method includes the following operations S81 to S85. Specifically:
S81: Obtain movement data of the terminal.
S83: Determine, according to the movement data, the relative displacement between the terminal and the user.
S85: Control the content displayed on the display screen of the terminal to move in the direction opposite to the relative displacement.
The details of the above operations are described below.
Obtaining movement data
Movement data refers to the movement parameters of the terminal, representing the current movement state of the terminal or its movement state over some past period of time. It may consist of movement parameters such as initial velocity, acceleration, rotational velocity and rotational acceleration, or of parameters that reflect that the terminal has moved, such as a change in angle or distance relative to a reference object. Any parameter that can be used to compute the relative displacement between the terminal and the user may serve as movement data; the present invention is not limited in this respect. As examples, this operation may be implemented in at least the following two ways:
Way one: obtain the acceleration of the terminal along the screen direction, or obtain the acceleration and the rotation angle of the terminal along the screen direction. In this way, the displacement of the terminal can be computed directly.
Way two: compute the change in the angle at which the line connecting the terminal's screen and the user's reference position intersects the terminal's screen. In this way, a reference parameter is obtained that can serve as the basis for the subsequent adjustment of the relative displacement.
Determining the relative displacement
Based on the obtained terminal acceleration, the displacement of the terminal along the screen direction is computed as the relative displacement between the terminal and the user; or, based on the obtained acceleration and rotation angle, the displacement of the terminal along the screen direction is computed as the relative displacement between the terminal and the user.
It can be understood that, if an angle change value is obtained as the terminal's movement data, a larger angle change value corresponds to a larger relative displacement.
Display control
When performing display adjustment control, as one implementation, a display position offset value, that is, the offset value of the image displayed on the terminal's display screen, may be further computed. Specifically, the display position offset value may be computed from the above relative displacement, with its direction opposite to the direction of the relative displacement. Since the shaking of the terminal may be very large, moving the content on the screen sometimes cannot completely cancel the shaking or wobbling of the terminal, but it can at least reduce the impact of the shaking.
As one implementation, a predetermined threshold may be set, and how display control is performed is decided according to the relationship between the relative displacement and the threshold.
Specifically, if the relative displacement is greater than the predetermined threshold, the display position offset value is smaller than the relative displacement; that is, the ratio of the display position offset value to the relative displacement is smaller than 1.
The operation of the display control method according to the third embodiment of the present invention is further described below through an example. In this example, a mobile phone is taken as the terminal: an accelerometer and a gyroscope are provided on the phone to collect the phone's shake data, from which the movement distance of the phone along the screen direction is computed, and the display of the phone screen is then controlled so that the content displayed on the phone (for example, text) moves oppositely to the movement of the phone. For the user, although the phone is shaking, the text displayed on the screen is relatively still, and the content on the screen is therefore easier to read.
Example
First, as shown in FIG. 9, the x axis, the y axis and the z axis are shown, where the z axis corresponds to the distance to the user. The accelerometer collects the acceleration of the terminal (the phone) along the x and y axes of the screen direction, with a collection time interval of t.
The displacements along the x and y axes, denoted Sx and Sy respectively, are computed using the displacement formula
$$s = v_0 t + \tfrac{1}{2} a t^2$$
where s is the displacement, $v_0$ the initial velocity, a the acceleration and t the time.
The rotation angles about the x, y and z axes are computed by the gyroscope and denoted Rx, Ry and Rz respectively.
Then, the collected raw data is converted into pixel data. Suppose the phone screen has width w, height h, and resolution Pw × Ph. This operation may be implemented by providing a conversion module such as a calculator.
Psw = (Pw / w) × Sx, where Psw is the number of displacement pixels along the x axis.
Psh = (Ph / h) × Sy, where Psh is the number of displacement pixels along the y axis.
Afterwards, adjustment control of the display content is performed according to the computed Psw and Psh. Specifically, according to the rotation angles Rx, Ry and Rz collected by the gyroscope, the screen content is rotated three-dimensionally within a certain range, thereby neutralizing and canceling the displacement of the device. This operation may be implemented by providing an adjustment module such as a controller.
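The translational part of this example transcribes directly into code. The following Python sketch converts accelerometer readings into a pixel-space counter-shift; the parameter names are illustrative, and the gyroscope rotation step is omitted and would be applied to the returned shift separately.

```python
def pixel_counter_shift(ax, ay, v0x, v0y, t, screen_w, screen_h, pw, ph):
    """Convert accelerometer readings into a pixel-space counter-shift,
    following the example's displacement and conversion formulas.
    screen_w/screen_h are the physical width and height; pw/ph the resolution."""
    sx = v0x * t + 0.5 * ax * t * t     # s = v0*t + a*t^2 / 2, along the x axis
    sy = v0y * t + 0.5 * ay * t * t     # same along the y axis
    psw = (pw / screen_w) * sx          # Psw: displacement pixels, x axis
    psh = (ph / screen_h) * sy          # Psh: displacement pixels, y axis
    return -psw, -psh                   # shift content opposite to the motion
```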
As described above, in this embodiment, the movement data of the terminal is obtained, the relative displacement between the mobile terminal and the user is determined on that basis, and the display of the content (for example, an image) on the display screen is then adjusted so that the image moves in the direction opposite to the relative displacement, canceling the vibration or shaking produced by the terminal. For the user, the shaking of the displayed content is visually reduced, so that the user can browse the displayed content relatively clearly, improving the user experience.
To facilitate understanding, an exemplary architecture of the mobile phone used in the embodiments of the present invention is further provided, with reference to FIG. 10; this architecture is equally applicable to the first and second embodiments described above. The phone architecture shown in FIG. 10 is merely exemplary, intended to facilitate understanding of the embodiments of the present invention and not to impose any form of limitation.
As shown in FIG. 10, the phone includes components such as a radio frequency (RF) circuit 610, a memory 620, an input unit 630, a display unit 640, a sensor 650, an audio circuit 660, a WiFi module 670, a processor 680, and a power supply 690.
The RF circuit 610 is configured to send and receive information, or to send and receive signals during a call; in particular, it receives downlink information from a base station, hands it to the processor 680 for processing, and sends uplink data to the base station. Typically, the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the RF circuit 610 may communicate with networks and other devices through wireless communication.
The memory 620 may be configured to store software programs and modules. By running the software programs or modules stored in the memory 620 and/or invoking data in the memory 620, the processor 680 executes the various functional applications and data processing of the phone; specifically, it may execute the method according to any of the above embodiments or examples of the present invention. The memory 620 may include random access memory, non-volatile memory, and the like. The processor 680 is the control center of the phone, connecting all parts of the whole phone through various interfaces and lines. The processor 680 may include one or more processing units; as one implementation, an application processor and a modem processor may be integrated.
The input unit 630 is configured to receive input information and may be a touch panel 631 and other input devices. The input unit 630 may serve as the input unit 70 in the second embodiment described above.
The display unit 640 is configured to perform display and may include a display panel 641. The touch panel 631 may cover the display panel 641.
The sensor 650 may be a light sensor, a motion sensor, or another sensor. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally along three axes) and, when stationary, can detect the magnitude and direction of gravity. It can be used in applications that recognize the phone's attitude (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and in vibration-recognition-related functions (such as pedometers and tap detection).
In addition, the program code for implementing the present invention may also be stored in a computer-readable storage medium. Examples of computer-readable storage media include, but are not limited to, magnetic disks, magneto-optical disks, hard disks, and the like. When a data processing device reads the program code stored in the computer-readable storage medium, the operations of the display control method according to the embodiments of the present invention can be executed.
Fourth Embodiment
According to the fourth embodiment of the present invention, another display control apparatus is provided, suitable for executing the display control method of the third embodiment. FIG. 11 is a structural block diagram of the display control apparatus. As shown in FIG. 11, the display control apparatus 110 includes an acquiring unit 111, a computing unit 113, and a control unit 115.
Specifically, the acquiring unit 111 is configured to obtain the movement data of the terminal, which may be the acceleration of the terminal along the screen direction, the rotation angle and acceleration of the terminal along the screen direction, or the change in the angle at which the line connecting the terminal's screen and the user's reference position intersects the terminal's screen. The acquiring unit 111 may be implemented by the above-mentioned accelerometer and gyroscope.
The computing unit 113 is configured to compute the relative displacement between the terminal and the user according to the movement data. Specifically, the displacement of the terminal along the screen direction is computed according to the acceleration as the relative displacement between the terminal and the user; or the displacement of the terminal along the screen direction is computed according to the acceleration and the rotation angle as the relative displacement between the terminal and the user. The computing unit 113 may be implemented by a calculator.
The control unit 115 is configured to control the content displayed on the terminal's display screen to move in the direction opposite to the relative displacement. The control unit 115 may be implemented by a controller.
The details of each structure of the apparatus can be understood and implemented with reference to the third embodiment described above, and are not repeated here.
The above descriptions are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (20)

  1. A display control method, comprising:
    continuously capturing face images and extracting face reference points contained in the face images, wherein the face images comprise: a first face image containing at least one first face reference point and a second face image containing at least one second face reference point;
    obtaining a face position offset value based on the first face reference point and the second face reference point; and
    obtaining, based on the face position offset value, a display position offset value of content displayed on a display screen, and performing display control according to the display position offset value.
  2. The method according to claim 1, wherein obtaining the face position offset value comprises:
    obtaining image coordinates, the image coordinates comprising first image coordinates corresponding to the first face reference point and second image coordinates corresponding to the second face reference point;
    obtaining, according to the image coordinates, spatial coordinates of the face reference points in a camera coordinate system, the spatial coordinates comprising first spatial coordinates corresponding to the first face reference point and second spatial coordinates corresponding to the second face reference point; and
    taking the coordinate offset value of the second spatial coordinates relative to the first spatial coordinates as the face position offset value.
  3. The method according to claim 2, wherein the face images contain at least two face reference points, and before the face images are captured, the method further comprises:
    obtaining input length data between the at least two face reference points.
  4. The method according to claim 3, wherein obtaining the spatial coordinates comprises:
    obtaining normalized coordinates of the image coordinates in the camera coordinate system;
    computing, according to the normalized coordinates, unit vectors from the origin of the camera coordinate system to the face reference points, and obtaining the angles between the unit vectors;
    computing the distances from the origin to the face reference points according to the length data between the face reference points and the angles between the unit vectors; and
    computing the spatial coordinates corresponding to the face reference points in the camera coordinate system according to the distances from the origin to the face reference points and the unit vectors.
  5. The method according to claim 1, wherein obtaining the face position offset value comprises:
    obtaining image coordinates, the image coordinates comprising first image coordinates corresponding to the first face reference point and second image coordinates corresponding to the second face reference point; and
    taking the coordinate offset value of the second image coordinates relative to the first image coordinates as the face position offset value.
  6. The method according to claim 1, 2 or 5, wherein obtaining the display position offset value comprises:
    determining whether the face position offset value is greater than a preset offset threshold; and
    when the determination result is yes, determining the display position offset value according to the face position offset value and a preset maximum allowable offset value.
  7. The method according to claim 6, wherein the face position offset value comprises at least one of: an X coordinate offset value, a Y coordinate offset value, and a distance offset value; the preset threshold comprises at least one of: an X coordinate offset threshold, a Y coordinate offset threshold, and a distance offset threshold; the maximum allowable offset value comprises: a maximum offset value in the X-axis direction and a maximum offset value in the Y-axis direction; and determining the display position offset value comprises:
    taking, of the X coordinate offset value and the maximum offset value in the X-axis direction, the one with the smaller absolute value as the display position offset value in the X-axis direction; and/or
    taking, of the Y coordinate offset value and the maximum offset value in the Y-axis direction, the one with the smaller absolute value as the display position offset value in the Y-axis direction.
  8. The method according to claim 1, wherein
    obtaining the face position offset value comprises: determining movement speed and/or acceleration information of the position offset of the second face reference point relative to the first face reference point; and
    obtaining the display position offset value comprises: determining a display position offset adjustment speed according to the movement speed and/or acceleration information.
  9. The method according to any one of the preceding claims, wherein the face images are two adjacent face images among a plurality of face images obtained within a predetermined period or at predetermined time intervals.
  10. The method according to any one of the preceding claims, wherein the display control is performed in at least one of the following manners: translation processing, enlargement processing, and reduction processing.
  11. A display control apparatus, comprising:
    an image capture unit, configured to continuously capture face images containing face reference points, wherein the face images comprise: a first face image and a second face image;
    an extraction unit, configured to extract the face reference points, wherein the face reference points comprise: a first face reference point contained in the first face image and a second face reference point contained in the second face image;
    a first computing unit, configured to compute a face position offset value according to the face reference points extracted by the extraction unit;
    a second computing unit, configured to compute, according to the face position offset value, a display position offset value of content displayed on a display screen; and
    a display control unit, configured to perform display control according to the display position offset value.
  12. The apparatus according to claim 11, wherein the first computing unit comprises:
    an image coordinate computing unit, configured to obtain image coordinates, the image coordinates comprising first image coordinates corresponding to the first face reference point and second image coordinates corresponding to the second face reference point;
    a spatial coordinate obtaining unit, configured to obtain, according to the image coordinates, spatial coordinates of the face reference points in a camera coordinate system, the spatial coordinates comprising first spatial coordinates corresponding to the first face reference point and second spatial coordinates corresponding to the second face reference point; and
    an offset value computing unit, configured to compute the coordinate offset value of the second spatial coordinates relative to the first spatial coordinates as the face position offset value.
  13. A display control method, comprising:
    obtaining movement data of a terminal;
    determining, according to the movement data, a relative displacement between the terminal and a user; and
    controlling content displayed on a display screen of the terminal to move in a direction opposite to the relative displacement.
  14. The method according to claim 13, wherein obtaining the movement data of the terminal comprises:
    obtaining an acceleration of the terminal along a screen direction; or
    obtaining an acceleration and a rotation angle of the terminal along the screen direction.
  15. The method according to claim 14, wherein determining the relative displacement between the terminal and the user comprises:
    computing, according to the acceleration, a displacement of the terminal along the screen direction as the relative displacement between the terminal and the user; or
    computing, according to the acceleration and the rotation angle, a displacement of the terminal along the screen direction as the relative displacement between the terminal and the user.
  16. The method according to claim 13, wherein obtaining the movement data of the terminal comprises:
    computing a change in the angle at which a line connecting the screen of the terminal and a reference position of the user intersects the screen of the terminal.
  17. The method according to any one of claims 13 to 16, wherein controlling the content displayed on the display screen of the terminal to move in the direction opposite to the relative displacement comprises:
    comparing the relative displacement with a preset threshold, and controlling, according to the comparison result, the content displayed on the display screen of the terminal to move by a display displacement;
    wherein the direction of the display displacement is opposite to the direction of the relative displacement; when the relative displacement is greater than the preset threshold, the display displacement is smaller than the relative displacement; and when the relative displacement is smaller than or equal to the preset threshold, the ratio of the absolute values of the display displacement to the relative displacement is smaller than 1.
  18. A display control apparatus, comprising:
    an acquiring unit, configured to obtain movement data of a terminal;
    a computing unit, configured to compute, according to the movement data, a relative displacement between the terminal and a user; and
    a control unit, configured to control content displayed on a display screen of the terminal to move in a direction opposite to the relative displacement.
  19. The apparatus according to claim 18, wherein the acquiring unit is configured to obtain at least one of: an acceleration of the terminal along a screen direction, and a rotation angle of the terminal along the screen direction.
  20. The apparatus according to claim 19, wherein the computing unit is configured to compute, according to the acceleration, a displacement of the terminal along the screen direction as the relative displacement between the terminal and the user; or to compute, according to the acceleration and the rotation angle, a displacement of the terminal along the screen direction as the relative displacement between the terminal and the user.