WO2016155226A1 - Display control method and apparatus, and electronic device - Google Patents

Display control method and apparatus, and electronic device

Info

Publication number
WO2016155226A1
WO2016155226A1 (PCT/CN2015/088685, CN2015088685W)
Authority
WO
WIPO (PCT)
Prior art keywords
user
observed object
observed
clearly observe
determining
Prior art date
Application number
PCT/CN2015/088685
Other languages
English (en)
French (fr)
Inventor
唐明勇 (Tang Mingyong)
刘华一君 (Liu Huayijun)
陈涛 (Chen Tao)
Original Assignee
小米科技有限责任公司 (Xiaomi Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 小米科技有限责任公司 (Xiaomi Inc.)
Priority to RU2016111926A priority Critical patent/RU2648593C2/ru
Priority to KR1020167008479A priority patent/KR101909721B1/ko
Priority to JP2016521986A priority patent/JP6263617B2/ja
Priority to MX2016002722A priority patent/MX355082B/es
Priority to BR112016006044A priority patent/BR112016006044A2/pt
Publication of WO2016155226A1 publication Critical patent/WO2016155226A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/0016Operational features thereof
    • A61B3/0025Operational features thereof characterised by electronic signal processing, e.g. eye models
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/103Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining refraction, e.g. refractometers, skiascopes
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/163Wearable computers, e.g. on a belt
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G09G3/003Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background to produce spatial visual effects
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/14Display of multiple viewports
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B2027/0178Eyeglass type
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/02Improving the quality of display appearance
    • G09G2320/0261Improving the quality of display appearance in the context of movement of objects on the screen or movement of the observer relative to the screen
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/04Changes in size, position or resolution of an image
    • G09G2340/045Zooming at least part of an image, i.e. enlarging it or shrinking it
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/12Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/14Solving problems related to the presentation of information to be displayed
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00Aspects of interface with display user
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/34Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source
    • G09G3/36Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source using liquid crystals

Definitions

  • the present disclosure relates to the field of smart device technologies, and in particular, to a display control method and apparatus, and an electronic device.
  • the present disclosure provides a display control method and apparatus, and an electronic device to solve the deficiencies in the related art.
  • a display control method including:
  • the acquired image of the observed object is enlarged and displayed.
  • the method further includes:
  • receiving user input information, where the user input information includes the vision information; or
  • the method further includes:
  • the user's line of sight feature is captured and an object matching the line of sight feature within the user's field of view is used as the object to be observed.
  • the determining whether the observed object belongs to an object that the user can clearly observe includes:
  • determining whether the observed object belongs to an object that the user can clearly observe according to whether the separation distance is greater than or equal to a preset visible distance; wherein, if it is, the observed object does not belong to such an object.
  • the determining whether the observed object belongs to an object that the user can clearly observe includes:
  • determining whether the observed object belongs to an object that the user can clearly observe according to whether the observed object contains detailed information; wherein, if it does, the observed object does not belong to such an object.
  • the determining whether the observed object belongs to an object that the user can clearly observe according to whether the observed object includes detailed information includes:
  • the determining whether the observed object belongs to an object that the user can clearly observe according to whether the observed object includes detailed information includes:
  • the determining whether the observed object belongs to an object that the user can clearly observe includes:
  • determining whether the observed object belongs to an object that the user can clearly observe according to whether the duration or the line-of-sight focusing time is greater than or equal to a preset duration; wherein, if it is, the observed object does not belong to such an object.
  • the determining whether the observed object belongs to an object that the user can clearly observe includes:
  • the enlarging and displaying of the acquired image of the observed object includes:
  • displaying a display area of a preset specification, and enlarging and displaying an image of the observed object in the display area.
  • the method further includes:
  • the display area is moved according to the line of sight movement event, and the image displayed in the display area is updated and enlarged according to an enlargement ratio of the image of the observed object.
  • the method further includes:
  • a display control apparatus includes:
  • a judging unit that judges, according to the vision information of the user, whether the observed object belongs to an object that the user can clearly observe; and
  • a display unit that enlarges and displays the acquired image of the observed object if the determination result is negative.
  • the apparatus further includes:
  • receiving user input information, where the user input information includes the vision information; or
  • the vision information associated with the registered account is read.
  • the apparatus further includes:
  • the capturing unit captures a line-of-sight feature of the user and uses an object matching the line-of-sight feature within the user's field of view as the observed object.
  • the determining unit includes:
  • a distance obtaining subunit obtaining a separation distance between the observed object and the user;
  • the distance determining subunit determines whether the observed object belongs to an object that the user can clearly observe according to whether the separation distance is greater than or equal to a preset visible distance; wherein, if it is, the observed object does not belong to such an object.
  • the determining unit includes:
  • the detail determining subunit determines whether the observed object belongs to an object that the user can clearly observe according to whether the observed object contains detailed information; wherein, if it does, the observed object does not belong to such an object.
  • the detail determining subunit includes:
  • a change obtaining module acquiring a degree of change of a preset pixel feature parameter value of the observed object;
  • the first change judging module determines whether the observed object contains the detailed information according to whether the degree of change of the preset pixel feature parameter value is greater than or equal to a preset degree of change; wherein, if it is, the observed object contains the detailed information.
  • the detail determining subunit includes:
  • an object acquisition module that acquires surrounding objects of the observed object;
  • the second change judging module determines whether the observed object contains detailed information according to whether the degree of change of the preset pixel feature parameter value of the observed object is consistent with that of the surrounding objects; wherein, if it is not consistent, the observed object contains the detailed information.
  • the determining unit includes:
  • a time acquisition subunit obtaining the duration for which the user observes the same object, or the line-of-sight focusing time when observing the same object;
  • the time judging subunit determines whether the observed object belongs to an object that the user can clearly observe according to whether the duration or the line-of-sight focusing time is greater than or equal to a preset duration; wherein, if it is, the observed object does not belong to such an object.
  • the determining unit includes:
  • the type determining subunit determines whether the observed object belongs to an object that the user can clearly observe according to whether the observed object includes a preset type of information element; wherein, if it does, the observed object does not belong to such an object.
  • the display unit includes:
  • an area display subunit that displays a display area of a preset specification; and
  • an enlargement processing subunit that enlarges and displays an image of the observed object in the display area.
  • the apparatus further includes:
  • a monitoring unit that monitors a user's line of sight movement event
  • an updating unit that moves the display area according to the line of sight movement event, and updates and enlarges the image displayed in the display area according to an enlargement ratio of the image of the observed object.
  • the apparatus further includes:
  • a cancelling unit that, when the user's line of sight moves out of the range of the observed object, cancels the enlarged display of the image of the observed object.
  • an electronic device including:
  • a memory for storing processor-executable instructions; and
  • a processor configured to:
  • the acquired image of the observed object is enlarged and displayed.
  • the technical solution of the present disclosure can accurately determine whether the user can clearly observe the observed object; at the same time, by enlarging the display, an observed object that the user cannot clearly observe is made clearly observable, without the user needing to move closer to the observed object.
  • FIG. 1 is a flow chart showing a display control method according to an exemplary embodiment.
  • FIGS. 2-4 are schematic diagrams of acquiring vision information of a user, according to an exemplary embodiment.
  • FIG. 5 is a schematic diagram of determining an object to be observed, according to an exemplary embodiment.
  • FIGS. 6-7 are schematic diagrams illustrating whether an observed object can be clearly observed by a user, according to an exemplary embodiment.
  • FIGS. 8-10 are schematic diagrams showing an enlarged view of an observed object, according to an exemplary embodiment.
  • FIGS. 11-22 are block diagrams of a display control device, according to an exemplary embodiment.
  • FIG. 23 is a schematic structural diagram of an apparatus for display control according to an exemplary embodiment.
  • FIG. 1 is a flowchart of a display control method according to an exemplary embodiment. As shown in FIG. 1, the method may include the following steps.
  • in step 102, based on the vision information of the user, it is determined whether the observed object belongs to an object that the user can clearly observe.
  • in step 104, if not, the acquired image of the observed object is enlarged and displayed.
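The two steps above can be sketched as a minimal control loop. This is an illustrative Python sketch, not the patent's implementation; the function names and string-based "image" are assumptions:

```python
from typing import Callable

def display_control(image: str, can_clearly_observe: Callable[[], bool]) -> str:
    """Step 102: judge observability; step 104: enlarge the image if the
    judgment is negative, otherwise leave the view unchanged."""
    if can_clearly_observe():
        return image                 # clearly observable: no processing needed
    return f"enlarged({image})"      # not clearly observable: enlarge and display
```

Any of the criteria the description discusses (separation distance, detail content, dwell time) could be plugged in as the `can_clearly_observe` predicate.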
  • the user's vision information refers to the ability of the user's retina to distinguish images. Vision differs between users; for example, some users may have myopia, astigmatism, or farsightedness, which may affect how they observe the observed object.
  • the technical solution of the present disclosure can be applied to a wearable device, such as smart glasses: by determining the vision information of the user and the observed object, the device can automatically infer whether the observed object can be clearly observed by the user, and thereby automatically perform enlarged display processing.
  • any electronic device that has information processing and image display capabilities (such as a display screen), can identify the observed object, and can judge the user's observation of the observed object may be used in the technical solution of the present disclosure.
  • the smart glasses can actively perform optometry on the user's eyes to obtain the user's vision information.
  • the computer optometry process in the related art can be used; its principle is retinoscopy of the retina.
  • the smart glasses can use a retinoscope to project a beam of light through the refractive system of the user's eye directly onto the retina; the light reflected by the retina then returns to the retinoscope, enabling detection of the vision information.
  • during the optometry process, the user's eye can be relaxed and adjusted by an infrared light source and an automatic fogging device, and the diopter of the user's eyes is checked by photoelectric technology and automatic control technology.
  • the smart glasses may obtain the vision information from received user input information.
  • an input interface for the vision information is displayed; the input interface includes an input keyboard, and by capturing the movement of the user's eyeball, the button on the input keyboard on which the user's line of sight focuses is determined, thereby completing the information input.
  • the smart glasses can be associated with other devices such as the user's smart phone, and the user can input his or her own vision information directly in the smart phone interface shown in FIG. 3.
  • the smart glasses may read the vision information associated with the logged-in account according to the logged-in account of the user.
  • the user can log in to an account on the smart glasses, and the account has a one-to-one correspondence with the user; therefore, if the user stores information such as his or her vision information, height, and weight in the cloud in advance, the smart glasses can read the user's vision information through the Internet according to the currently logged-in account.
  • the "user field of view” can be understood as the viewing range of the user's eyes.
  • the user's field of view is a sector-shaped area in the plan view direction.
  • since the smart glasses are located in front of the user's eyes, in line with the user's eyes, and rotate as the user's head turns, it can be considered that the image acquisition range of the camera on the smart glasses is basically consistent with the user's field of view.
  • the smart glasses can capture the line-of-sight feature of the user by recording the rotation of the user's eyes, thereby using an object matching the line-of-sight feature within the user's field of view as the object to be observed.
  • by capturing the user's left-eye line of sight and right-eye line of sight, the object at the intersection of the two lines of sight is taken as the observed object in the present disclosure.
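In the plan view, locating the observed object at the intersection of the two gaze lines is a small geometry problem. A minimal 2-D sketch (the eye positions, direction vectors, and metre units are illustrative assumptions, not values from the patent):

```python
def gaze_intersection(p_left, d_left, p_right, d_right):
    """Intersect the two gaze rays p + t*d in the 2-D plan view.

    Returns the intersection point, or None when the gazes are parallel
    (e.g. the user is looking toward infinity).
    """
    (px, py), (dx1, dy1) = p_left, d_left
    (qx, qy), (dx2, dy2) = p_right, d_right
    # Solve p_left + t*d_left == p_right + s*d_right via Cramer's rule.
    det = dx2 * dy1 - dx1 * dy2
    if abs(det) < 1e-12:
        return None
    t = (dx2 * (qy - py) - dy2 * (qx - px)) / det
    return (px + t * dx1, py + t * dy1)
```

For example, eyes 6 cm apart both aimed at a point 1 m ahead intersect at that point, while parallel gazes return None.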
  • the separation distance between the observed object and the user may be acquired, and according to whether the separation distance satisfies the preset visible distance, it is determined whether the observed object belongs to an object that the user can clearly observe.
  • the smart glasses determine the observed object A in the field of view of the user by capturing the line-of-sight feature of the user, and measure the distance between the observed object A and the user as L1; similarly, the smart glasses determine the observed object B within the user's field of view, and measure the distance between the observed object B and the user as L2.
  • if the user has a myopia problem, the farthest distance that the user can clearly observe, L0, can be determined from the user's vision information; if L1 < L0 < L2, it is determined that the observed object A can be clearly observed by the user and needs no enlarged display, while the observed object B cannot be clearly observed by the user and requires enlarged display.
  • if the user has a farsightedness problem, the closest distance that the user can clearly observe, L0', can be determined from the user's vision information; if L1 < L0' < L2, it is determined that the observed object A cannot be clearly observed by the user and requires enlarged display, while the observed object B can be clearly observed by the user and needs no enlarged display.
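The two distance cases above reduce to one predicate. A hedged sketch, where the dict keys (`"type"`, `"L0"`, `"L0_prime"`) and the normal-vision fallback are illustrative assumptions:

```python
def clearly_observable_by_distance(distance_m: float, vision: dict) -> bool:
    """Distance criterion: myopia has a farthest clear distance L0,
    farsightedness (hyperopia) a closest clear distance L0'."""
    if vision["type"] == "myopia":
        return distance_m < vision["L0"]        # beyond L0: cannot see clearly
    if vision["type"] == "hyperopia":
        return distance_m > vision["L0_prime"]  # closer than L0': cannot see clearly
    return True  # assumed: no distance restriction for normal vision
```

With L1 = 2 m, L2 = 8 m, and L0 = L0' = 5 m, this reproduces both examples: under myopia object A is clear and B is not; under farsightedness the reverse.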
  • whether or not the observed object belongs to an object that the user can clearly observe may be determined according to whether the observed object contains detailed information.
  • if the observed object contains no detailed information, such as the white wall (object A) shown in FIG. 7, the magnified display does not actually make sense; if the observed object contains detailed information, such as the picture (object B) shown in FIG. 7, it can be enlarged to show the details of the picture. Therefore, in the technical solution of this embodiment, the key is to determine whether detailed information is contained in the observed object.
  • the smart glasses can acquire the degree of change of the preset pixel feature parameter value of the observed object, and then determine whether the observed object contains the detail information according to whether the degree of change of the preset pixel feature parameter value satisfies greater than or equal to the preset change degree.
  • the pixel feature parameter value may be any one or more attribute values of each pixel, such as a gray value or the color value of each color channel. Taking the "white wall" and the "picture" shown in FIG. 7 as an example (the gray value is used here; other types of pixel feature parameter values work the same way): all pixels of the former are "white", so there is no change in the gray value and it contains no detailed information; for the latter, because the picture contains many details, such as color changes, there are obvious differences and changes in the gray values of the pixels, and the degree of change is large (for example, greater than or equal to the preset degree of change), so it is determined to contain detailed information.
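As a sketch of this first criterion, the "degree of change" can be taken as the range of gray values; the patent fixes no formula, so both the range measure and the threshold of 30 here are illustrative assumptions:

```python
def change_degree(gray_pixels) -> int:
    """Degree of change of the gray values, here simply max minus min."""
    flat = [v for row in gray_pixels for v in row]
    return max(flat) - min(flat)

def contains_detail(gray_pixels, preset_change_degree: int = 30) -> bool:
    # Detail is present when the change degree reaches the preset threshold.
    return change_degree(gray_pixels) >= preset_change_degree

white_wall = [[255, 255], [255, 255]]  # uniform gray values: no detail
painting   = [[250, 40], [120, 200]]   # strongly varying gray values: detail
```

The uniform "white wall" yields a change degree of 0 and is judged detail-free, while the varied "painting" exceeds the threshold.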
  • the smart glasses can acquire the surrounding objects of the observed object, and determine whether the observed object contains the detailed information according to whether the degree of change of the preset pixel characteristic parameter values of the observed object is consistent with the surrounding objects.
  • FIG. 7 is still taken as an example.
  • for the observed object A, the degree of change of the preset pixel feature parameter value is consistent with that of its surrounding objects, which means that the observed object A and its surrounding objects together constitute one large uniform surface; the user has no need to observe details there, so it can be determined that it contains no detailed information. For the observed object B, since it is a picture hanging on a white wall, the degree of change of its preset pixel feature parameter values is not consistent with the surrounding objects and is large (for example, greater than or equal to the preset degree of change), so it is determined to contain detailed information.
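The second criterion compares the object's degree of change with that of its surrounding objects. A hedged sketch, again using the gray-value range as the change measure; the `tolerance` parameter is an assumption:

```python
def _gray_range(pixels) -> int:
    """Degree of change of a region's gray values (max minus min)."""
    flat = [v for row in pixels for v in row]
    return max(flat) - min(flat)

def detail_by_surroundings(obj_pixels, surrounding, tolerance: int = 10) -> bool:
    """A change degree consistent with every neighbour means the object
    blends into one large uniform surface (no detail); inconsistency with
    any neighbour means the object stands out (detail)."""
    obj_deg = _gray_range(obj_pixels)
    return any(abs(obj_deg - _gray_range(n)) > tolerance for n in surrounding)
```

A near-uniform wall patch matches its neighbours and is judged detail-free; a varied picture region against wall neighbours is judged to contain detail.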
  • when the user observes or focuses on a certain object for a long time, the user may wish to view it carefully, or may be focusing the line of sight because the object cannot be seen clearly; either way, this indicates that the user wishes to observe the current observed object in detail, so it needs to be enlarged.
  • whether the observed object belongs to an object that the user can clearly observe may be determined according to whether the observed object includes a preset type of information element.
  • the information elements included in the observed object may be further identified, such as whether characters or icons (such as the logo of a product) are included; if so, it is determined that the object cannot be clearly observed by the user and the display needs to be enlarged.
  • any one, or any combination, of the above determination methods can be applied to accurately judge whether the observed object can be clearly observed by the user.
  • a display area of a preset size may be displayed, and an image of the object to be observed is enlarged and displayed in the display area.
  • FIG. 7 is a schematic diagram of the imaging of the user's field of view in the user's eyes.
  • FIG. 8 may be a schematic diagram of the smart glasses displaying the observed object B in an enlarged manner.
  • a display area may be displayed in the image; the display area may take the form of a "magnifying glass" as shown in FIG. 8 or another form, and the disclosure is not limited thereto.
  • in the enlarged display, the dot pattern on the left side of the skirt of the person in the picture can be seen, a level of detail that the user cannot observe directly.
  • the user can control the scaling of the observed object, for example, after the user gives an instruction to enlarge the scale of the smart glasses, the schematic diagram shown in FIG. 9 is obtained.
  • FIG. 9 simultaneously enlarges the display area and the display content belonging to the observed object B within it; of course, it is also possible to enlarge only the display content in the display area, but enlarging the display area as well ensures a sufficiently large display area, which is convenient for the user to observe.
  • the smart glasses can monitor the user's line of sight movement event, move the display area according to the line of sight movement event, and update and enlarge the image displayed in the display area according to the enlargement ratio of the image of the observed object. For example, as shown in FIG. 10, when the user's line of sight moves to the right side, the display area moves accordingly, and the display content located in the display area is updated correspondingly, such as the "left side skirt" shown in FIG. 9 being updated to the "right side skirt" shown in FIG. 10.
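The moving "magnifying glass" can be sketched as re-cropping around the gaze point and enlarging by nearest-neighbour repetition. Illustrative only: `image` is a 2-D list standing in for the acquired frame, and the radius/scale parameters are assumptions:

```python
def magnifier_view(image, gaze_xy, radius: int = 1, scale: int = 2):
    """Crop a square region around the gaze point and enlarge it.

    Calling this again after a line-of-sight movement event, with the new
    gaze point, moves the display area and updates its magnified content.
    """
    x, y = gaze_xy
    crop = [row[max(0, x - radius):x + radius + 1]
            for row in image[max(0, y - radius):y + radius + 1]]
    enlarged = []
    for row in crop:
        wide = [v for v in row for _ in range(scale)]        # repeat columns
        enlarged.extend([list(wide) for _ in range(scale)])  # repeat rows
    return enlarged
```

Shifting the gaze point one pixel to the right shifts the cropped, magnified content accordingly, mirroring the "left side skirt" to "right side skirt" update.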
  • the "display area" in the form of a "magnifying glass" shown in FIGS. 8-10 is not required; the smart glasses may simply enlarge and display the content, with the entire display screen of the smart glasses serving as the "display area".
  • the present disclosure also provides an embodiment of the display control device.
  • FIG. 11 is a block diagram of a display control device according to an exemplary embodiment. Referring to FIG. 11, the device includes a judging unit 1101 and a display unit 1102.
  • the determining unit 1101 is configured to determine, according to the vision information of the user, whether the observed object belongs to an object that the user can clearly observe;
  • the display unit 1102 is configured to display the acquired image of the observed object in an enlarged manner if the determination result is negative.
  • FIG. 12 is a block diagram of another display control device according to an exemplary embodiment.
  • on the basis of the foregoing embodiment shown in FIG. 11, the device may further include an acquiring unit 1103.
  • the obtaining unit 1103 is configured to acquire the visual information of the user by:
  • the user input information includes the vision information
  • the vision information associated with the registered account is read.
  • FIG. 13 is a block diagram of another display control device according to an exemplary embodiment.
  • on the basis of the foregoing embodiment shown in FIG. 11, the device may further include a determining unit 1104 and a capturing unit 1105.
  • the determining unit 1104 is configured to determine a current user field of view
  • the capturing unit 1105 is configured to capture a line-of-sight feature of the user and to use an object matching the line-of-sight feature within the user's field of view as the observed object.
  • the configuration of the determining unit 1104 and the capturing unit 1105 in the device embodiment shown in FIG. 13 may also be included in the foregoing device embodiment of FIG. 12, and the disclosure is not limited thereto.
  • FIG. 14 is a block diagram of another display control apparatus according to an exemplary embodiment.
  • On the basis of the foregoing embodiment shown in FIG. 11, the determining unit 1101 may include: a distance acquisition subunit 1101A and a distance determination subunit 1101B.
  • the distance obtaining subunit 1101A is configured to acquire a separation distance between the observed object and the user;
  • the distance determining subunit 1101B is configured to determine whether the observed object belongs to objects that the user can clearly observe according to whether the separation distance is greater than or equal to a preset visible distance; wherein, if so, the observed object does not belong to such objects.
  • the configurations of the distance obtaining subunit 1101A and the distance judging subunit 1101B in the device embodiment shown in FIG. 14 may also be included in the foregoing device embodiments of FIGS. 12-13, and the present disclosure is not limited thereto.
  • FIG. 15 is a block diagram of another display control apparatus according to an exemplary embodiment, which is based on the foregoing embodiment shown in FIG. 11, and the determining unit 1101 may include: a detail determination subunit 1101C.
  • the detail determination subunit 1101C is configured to determine whether the observed object belongs to objects that the user can clearly observe according to whether the observed object includes detail information; wherein, if it does, the observed object does not belong to such objects.
  • FIG. 16 is a block diagram of another display control apparatus according to an exemplary embodiment, which is based on the foregoing embodiment shown in FIG. 15.
  • the detail determination subunit 1101C includes: a change acquisition module 1601 and a first change determination module 1602.
  • the change obtaining module 1601 is configured to acquire a degree of change of a preset pixel feature parameter value of the observed object
  • the first change judging module 1602 is configured to determine whether the observed object includes detail information according to whether the degree of change of the preset pixel feature parameter value is greater than or equal to a preset degree of change; wherein, if so, the observed object includes detail information.
  • FIG. 17 is a block diagram of another display control apparatus according to an exemplary embodiment, which is based on the foregoing embodiment shown in FIG. 15.
  • the detail determination subunit 1101C includes: an object acquisition module 1603 and a second change determination module 1604.
  • the object acquiring module 1603 is configured to acquire a surrounding object of the observed object
  • the second change judging module 1604 is configured to determine whether the observed object includes detail information according to whether the degree of change of the preset pixel feature parameter value of the observed object is consistent with that of its surrounding objects; wherein, if inconsistent, the observed object includes detail information.
  • FIG. 18 is a block diagram of another display control apparatus according to an exemplary embodiment, which is based on the foregoing embodiment shown in FIG. 11, and the determining unit 1101 may include: a time acquisition subunit 1101D and a time determination subunit 1101E.
  • the time acquisition sub-unit 1101D is configured to acquire a duration in which the user observes the same object, or a line-of-sight focus time when observing the same object;
  • the time judging subunit 1101E is configured to determine whether the observed object belongs to objects that the user can clearly observe according to whether the duration or the line-of-sight focusing time is greater than or equal to a preset duration; wherein, if so, the observed object does not belong to such objects.
  • The structures of the time acquisition subunit 1101D and the time judging subunit 1101E in the device embodiment shown in FIG. 18 may also be included in the foregoing device embodiments of FIGS. 12-17, and the present disclosure is not limited thereto.
  • FIG. 19 is a block diagram of another display control apparatus according to an exemplary embodiment, which is based on the foregoing embodiment shown in FIG. 11, and the determining unit 1101 may include: a type judging subunit 1101F.
  • the type judging subunit 1101F is configured to determine whether the observed object belongs to objects that the user can clearly observe according to whether the observed object includes a preset type of information element; wherein, if it does, the observed object does not belong to such objects.
  • the structure of the type judging subunit 1101F in the apparatus embodiment shown in FIG. 19 may also be included in the foregoing apparatus embodiment of FIG. 12-18, and the disclosure is not limited thereto.
  • FIG. 20 is a block diagram of another display control apparatus according to an exemplary embodiment.
  • On the basis of the foregoing embodiment shown in FIG. 11, the display unit 1102 includes: an area display subunit 1102A and an enlargement processing subunit 1102B.
  • the area display subunit 1102A is configured to display a display area of a preset specification
  • the enlargement processing sub-unit 1102B is configured to magnify an image of the observed object in the display area.
  • FIG. 21 is a block diagram of another display control device according to an exemplary embodiment.
  • On the basis of the foregoing embodiment shown in FIG. 20, the device may further include: a monitoring unit 1106 and an updating unit 1107.
  • the monitoring unit 1106 is configured to monitor a user's line of sight movement event
  • the updating unit 1107 is configured to move the display area according to the line of sight movement event, and update and enlarge the image displayed in the display area according to an enlargement ratio of the image of the observed object.
  • FIG. 22 is a block diagram of another display control apparatus according to an exemplary embodiment.
  • On the basis of the foregoing embodiment shown in FIG. 11, the apparatus may further include: a canceling unit 1108.
  • the canceling unit 1108 is configured to cancel the enlarged display of the image of the observed object when the line of sight of the user moves out of the range of the observed object.
  • The structure of the canceling unit 1108 in the device embodiment shown in FIG. 22 may also be included in the foregoing apparatus embodiments of FIGS. 12-21, and the disclosure is not limited thereto.
  • As for the device embodiments, since they basically correspond to the method embodiments, reference may be made to the corresponding descriptions of the method embodiments.
  • the device embodiments described above are merely illustrative, wherein the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the present disclosure. Those of ordinary skill in the art can understand and implement them without any creative effort.
  • Correspondingly, the present disclosure further provides a display control apparatus, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to: determine, according to the user's vision information, whether the observed object belongs to objects that the user can clearly observe; and if not, magnify and display the acquired image of the observed object.
  • Correspondingly, the present disclosure also provides a terminal, the terminal including a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors.
  • the one or more programs include instructions for: determining, according to the vision information of the user, whether the observed object belongs to objects that the user can clearly observe; and if not, magnifying and displaying the acquired image of the observed object.
  • FIG. 23 is a block diagram of an apparatus 2300 for display control, according to an exemplary embodiment.
  • device 2300 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
  • device 2300 can include one or more of the following components: processing component 2302, memory 2304, power component 2306, multimedia component 2308, audio component 2310, input/output (I/O) interface 2312, sensor component 2314, And a communication component 2316.
  • Processing component 2302 typically controls the overall operation of device 2300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • Processing component 2302 can include one or more processors 2320 to execute instructions to perform all or part of the steps of the above described methods.
  • processing component 2302 can include one or more modules to facilitate interaction between component 2302 and other components.
  • the processing component 2302 can include a multimedia module to facilitate interaction between the multimedia component 2308 and the processing component 2302.
  • Memory 2304 is configured to store various types of data to support operation at device 2300. Examples of such data include instructions for any application or method operating on device 2300, contact data, phone book data, messages, pictures, videos, and the like.
  • the memory 2304 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power component 2306 provides power to various components of device 2300.
  • Power component 2306 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 2300.
  • the multimedia component 2308 includes a screen between the device 2300 and the user that provides an output interface.
  • the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may sense not only the boundary of the touch or sliding action, but also the duration and pressure associated with the touch or slide operation.
  • the multimedia component 2308 includes a front camera and/or a rear camera. When the device 2300 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 2310 is configured to output and/or input an audio signal.
  • the audio component 2310 includes a microphone (MIC) that is configured to receive an external audio signal when the device 2300 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode.
  • the received audio signal may be further stored in memory 2304 or transmitted via communication component 2316.
  • the audio component 2310 also includes a speaker for outputting an audio signal.
  • the I/O interface 2312 provides an interface between the processing component 2302 and the peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
  • Sensor assembly 2314 includes one or more sensors for providing device 2300 with a status assessment of various aspects.
  • sensor assembly 2314 can detect the open/closed state of device 2300 and the relative positioning of components (e.g., the display and keypad of device 2300); sensor component 2314 can also detect a change in position of device 2300 or of one of its components, the presence or absence of user contact with device 2300, the orientation or acceleration/deceleration of device 2300, and temperature changes of device 2300.
  • Sensor assembly 2314 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 2314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 2314 can also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 2316 is configured to facilitate wired or wireless communication between device 2300 and other devices.
  • the device 2300 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • communication component 2316 receives broadcast signals or broadcast associated information from an external broadcast management system via a broadcast channel.
  • the communication component 2316 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • In an exemplary embodiment, device 2300 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
  • In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium comprising instructions, such as the memory 2304 comprising instructions executable by the processor 2320 of the apparatus 2300 to perform the above method.
  • For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.


Abstract

A display control method and device, and an electronic device. The method includes: judging, according to vision information of a user, whether an observed object belongs to objects that the user can clearly observe; and if not, magnifying and displaying an acquired image of the observed object. With this method, the observed object can be automatically magnified and displayed, ensuring that the user can observe it clearly.

Description

DISPLAY CONTROL METHOD AND DEVICE, AND ELECTRONIC DEVICE
This application is based on and claims priority to Chinese Patent Application No. CN 201510150291.7, filed on March 31, 2015, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the field of smart device technology, and in particular to a display control method and device, and an electronic device.
BACKGROUND
With the development of technology, more and more wearable devices have appeared, such as smart bands and smart glasses. How to use the hardware features of wearable devices to make people's daily lives more convenient has become a problem demanding a prompt solution.
SUMMARY
The present disclosure provides a display control method and device, and an electronic device, to address deficiencies in the related art.
According to a first aspect of the embodiments of the present disclosure, a display control method is provided, including:
judging, according to vision information of a user, whether an observed object belongs to objects that the user can clearly observe;
if not, magnifying and displaying an acquired image of the observed object.
Optionally, the method further includes:
acquiring the vision information of the user in one of the following ways:
performing optometry on the user's eyes to obtain the vision information;
or, according to received user input information, the user input information containing the vision information;
or, reading, according to a logged-in account of the user, the vision information associated with the logged-in account.
Optionally, the method further includes:
determining a current field of view of the user;
capturing line-of-sight features of the user, and taking an object within the field of view that matches the line-of-sight features as the observed object.
Optionally, the judging whether the observed object belongs to objects that the user can clearly observe includes:
acquiring a separation distance between the observed object and the user;
judging, according to whether the separation distance is greater than or equal to a preset visible distance, whether the observed object belongs to objects that the user can clearly observe; wherein, if so, it does not.
Optionally, the judging whether the observed object belongs to objects that the user can clearly observe includes:
judging, according to whether the observed object contains detail information, whether the observed object belongs to objects that the user can clearly observe; wherein, if it does, it does not belong to such objects.
Optionally, the judging, according to whether the observed object contains detail information, whether the observed object belongs to objects that the user can clearly observe includes:
acquiring a degree of variation of a preset pixel feature parameter value of the observed object;
judging, according to whether the degree of variation of the preset pixel feature parameter value is greater than or equal to a preset degree of variation, whether the observed object contains detail information; wherein, if so, it does.
Optionally, the judging, according to whether the observed object contains detail information, whether the observed object belongs to objects that the user can clearly observe includes:
acquiring objects surrounding the observed object;
judging, according to whether the degree of variation of the preset pixel feature parameter value of the observed object is consistent with that of its surrounding objects, whether the observed object contains detail information; wherein, if inconsistent, it does.
Optionally, the judging whether the observed object belongs to objects that the user can clearly observe includes:
acquiring a duration for which the user observes a same object, or a line-of-sight focusing time when observing a same object;
judging, according to whether the duration or the line-of-sight focusing time is greater than or equal to a preset length of time, whether the observed object belongs to objects that the user can clearly observe; wherein, if so, it does not.
Optionally, the judging whether the observed object belongs to objects that the user can clearly observe includes:
judging, according to whether the observed object contains a preset type of information element, whether the observed object belongs to objects that the user can clearly observe; wherein, if it does, it does not.
Optionally, the magnifying and displaying the acquired image of the observed object includes:
displaying a display region of a preset specification;
magnifying and displaying the image of the observed object within the display region.
Optionally, the method further includes:
monitoring line-of-sight movement events of the user;
moving the display region according to the line-of-sight movement event, and updating the magnified image displayed within the display region according to a magnification ratio of the image of the observed object.
Optionally, the method further includes:
canceling the magnified display of the image of the observed object when the user's line of sight moves out of the range of the observed object.
According to a second aspect of the embodiments of the present disclosure, a display control device is provided, including:
a judging unit that judges, according to vision information of a user, whether an observed object belongs to objects that the user can clearly observe;
a display unit that, if the judgment result is negative, magnifies and displays an acquired image of the observed object.
Optionally, the device further includes:
an acquiring unit that acquires the vision information of the user in one of the following ways:
performing optometry on the user's eyes to obtain the vision information;
or, according to received user input information, the user input information containing the vision information;
or, reading, according to a logged-in account of the user, the vision information associated with the logged-in account.
Optionally, the device further includes:
a determining unit that determines a current field of view of the user;
a capturing unit that captures line-of-sight features of the user, and takes an object within the field of view that matches the line-of-sight features as the observed object.
Optionally, the judging unit includes:
a distance acquiring subunit that acquires a separation distance between the observed object and the user;
a distance judging subunit that judges, according to whether the separation distance is greater than or equal to a preset visible distance, whether the observed object belongs to objects that the user can clearly observe; wherein, if so, it does not.
Optionally, the judging unit includes:
a detail judging subunit that judges, according to whether the observed object contains detail information, whether the observed object belongs to objects that the user can clearly observe; wherein, if it does, it does not belong to such objects.
Optionally, the detail judging subunit includes:
a variation acquiring module that acquires a degree of variation of a preset pixel feature parameter value of the observed object;
a first variation judging module that judges, according to whether the degree of variation of the preset pixel feature parameter value is greater than or equal to a preset degree of variation, whether the observed object contains detail information; wherein, if so, it does.
Optionally, the detail judging subunit includes:
an object acquiring module that acquires objects surrounding the observed object;
a second variation judging module that judges, according to whether the degree of variation of the preset pixel feature parameter value of the observed object is consistent with that of its surrounding objects, whether the observed object contains detail information; wherein, if inconsistent, it does.
Optionally, the judging unit includes:
a time acquiring subunit that acquires a duration for which the user observes a same object, or a line-of-sight focusing time when observing a same object;
a time judging subunit that judges, according to whether the duration or the line-of-sight focusing time is greater than or equal to a preset length of time, whether the observed object belongs to objects that the user can clearly observe; wherein, if so, it does not.
Optionally, the judging unit includes:
a type judging subunit that judges, according to whether the observed object contains a preset type of information element, whether the observed object belongs to objects that the user can clearly observe; wherein, if it does, it does not.
Optionally, the display unit includes:
a region display subunit that displays a display region of a preset specification;
a magnification processing subunit that magnifies and displays the image of the observed object within the display region.
Optionally, the device further includes:
a monitoring unit that monitors line-of-sight movement events of the user;
an updating unit that moves the display region according to the line-of-sight movement event, and updates the magnified image displayed within the display region according to a magnification ratio of the image of the observed object.
Optionally, the device further includes:
a canceling unit that cancels the magnified display of the image of the observed object when the user's line of sight moves out of the range of the observed object.
According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, including:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
judge, according to vision information of a user, whether an observed object belongs to objects that the user can clearly observe;
if not, magnify and display an acquired image of the observed object.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects:
By acquiring the user's vision information, the present disclosure can accurately judge whether the user is able to clearly observe the observed object; meanwhile, by magnifying and displaying observed objects that the user cannot clearly observe, the user can observe them clearly without having to move closer to them.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
FIG. 1 is a flowchart of a display control method according to an exemplary embodiment.
FIGS. 2-4 are schematic diagrams of acquiring vision information of a user according to an exemplary embodiment.
FIG. 5 is a schematic diagram of determining an observed object according to an exemplary embodiment.
FIGS. 6-7 are schematic diagrams of judging whether an observed object can be clearly observed by a user according to an exemplary embodiment.
FIGS. 8-10 are schematic diagrams of magnifying and displaying an observed object according to an exemplary embodiment.
FIGS. 11-22 are block diagrams of a display control device according to an exemplary embodiment.
FIG. 23 is a schematic structural diagram of an apparatus for display control according to an exemplary embodiment.
DETAILED DESCRIPTION
Exemplary embodiments will now be described in detail, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
FIG. 1 is a flowchart of a display control method according to an exemplary embodiment. As shown in FIG. 1, the method may include the following steps.
In step 102, it is judged, according to vision information of a user, whether an observed object belongs to objects that the user can clearly observe.
In step 104, if not, an acquired image of the observed object is magnified and displayed.
Each of the processing features in steps 102 and 104 admits several possible implementations within the technical solution of the present disclosure; they are described in turn below.
I. The user's vision information
The user's vision information refers to the ability of the user's retina to resolve images. Users' eyesight differs: some users may suffer from myopia, astigmatism, or hyperopia, any of which may affect how the user observes an observed object.
The technical solution of the present disclosure can be applied to wearable devices such as smart glasses: by determining the user's vision information and the observed object, the device can automatically infer whether the observed object can be clearly observed by the user, and perform automatic magnified display accordingly.
For ease of description, the following embodiments take "smart glasses" as an example. However, those skilled in the art will understand that the technical solution of the present disclosure can obviously also be applied to other, non-wearable electronic devices: any electronic device that has information processing and image display capabilities (e.g., a display screen), can identify the observed object, and can judge how the user observes it can be used with the technical solution of the present disclosure.
1. Active detection
As an exemplary implementation, the smart glasses may actively perform optometry on the user's eyes to obtain the user's vision information.
When performing optometry, the smart glasses may use the computerized optometry process of the prior art, whose principle is "retinoscopy". As shown in FIG. 2, after the user puts on the smart glasses, the glasses can use a retinoscope to project a beam of light through the refractive system of the user's eye onto the retina, and detect the vision information when the light reflected by the retina reaches the retinoscope.
During optometry, an infrared light source and an automatic fogging device may also be used to relax and adjust the user's eyeballs, and photoelectric and automatic control technology may be used to examine the diopter of the user's eyes.
Of course, methods other than retinoscopy can obviously also be applied to the technical solution of the present disclosure, which is not limited in this regard.
2. User input
As another exemplary implementation, the smart glasses may obtain the vision information from received user input information containing it.
If users know their own eyesight, they can input their vision information directly. For example, an input interface for vision information, containing an input keyboard, may be shown on the display screen of the smart glasses; by capturing the user's eye movements, the glasses determine which key on the keyboard the user's line of sight focuses on, thereby completing the input.
Alternatively, as shown in FIG. 3, the smart glasses may be associated with another device such as the user's smartphone, and the user can input the vision information directly in the smartphone interface shown in FIG. 3.
3. Network acquisition
As yet another exemplary implementation, the smart glasses may read, according to a logged-in account of the user, the vision information associated with that account.
As shown in FIG. 4, the user can log in to an account on the smart glasses, and the account has a one-to-one association with the user. Therefore, if the user has stored information such as vision, height, and weight in the cloud in advance, the smart glasses can read the user's vision information over the Internet according to the currently logged-in account.
II. The observed object
The "user's field of view" can be understood as the observation range of the user's eyes; as shown in FIG. 5, the user's field of view forms a sector-shaped region when viewed from above.
The smart glasses sit in front of the user's eyes, face the same direction as the eyes, and turn together with the user's head, so the image capture range of the camera on the smart glasses can be considered essentially the same as the user's field of view.
However, when the orientation of the user's head and eyes stays fixed, the field of view does not change, yet rotation of the user's eyeballs will still change the observed object. Therefore, the smart glasses can record the rotation of the user's eyes to capture the user's line-of-sight features, and take the object within the field of view that matches those features as the observed object.
For example, in FIG. 5, by capturing the sight line of the user's left eye and that of the right eye, the object at the intersection of the two sight lines is taken as the observed object in the technical solution of the present disclosure.
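The intersection of the two sight lines described above can be sketched in plain 2D geometry (a top view, as in FIG. 5). This is an illustrative sketch only, not the patent's implementation; the point-plus-direction representation of each sight line is an assumption.

```python
def gaze_intersection(p_left, d_left, p_right, d_right):
    """Intersect the left-eye and right-eye sight lines in a 2D top view.

    Each sight line is given as an origin point (the eye position) plus a
    direction vector; solves p_left + t*d_left = p_right + s*d_right.
    Returns the fixation point, or None if the sight lines are parallel.
    """
    (x1, y1), (dx1, dy1) = p_left, d_left
    (x2, y2), (dx2, dy2) = p_right, d_right
    det = dx2 * dy1 - dx1 * dy2
    if det == 0:
        return None  # parallel sight lines: no single fixation point
    t = ((x2 - x1) * (-dy2) + dx2 * (y2 - y1)) / det
    return (x1 + t * dx1, y1 + t * dy1)

# Eyes 6 cm apart, both converging on a point straight ahead:
print(gaze_intersection((-3.0, 0.0), (1.0, 2.0), (3.0, 0.0), (-1.0, 2.0)))
# → (0.0, 6.0)
```

The object closest to the returned fixation point within the field of view would then be taken as the observed object.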
III. Whether the user can clearly observe the observed object
1. Distance-based judgment
As an exemplary implementation, the separation distance between the observed object and the user may be acquired, and whether the observed object belongs to objects the user can clearly observe is judged according to whether the separation distance is greater than or equal to a preset visible distance.
As shown in FIG. 6, suppose the smart glasses, by capturing the user's line-of-sight features, determine observed object A within the user's field of view and measure its separation distance from the user as L1; likewise, the glasses determine observed object B within the field of view and measure its separation distance from the user as L2.
Then, if the user is myopic and, based on the user's vision information, the farthest distance the user can clearly observe is determined to be L0, with L1 < L0 < L2, it is determined that object A can be clearly observed by the user and needs no magnified display, while object B cannot be clearly observed and needs magnified display.
Similarly, if the user is hyperopic and, based on the user's vision information, the nearest distance the user can clearly observe is determined to be L0', with L1 < L0' < L2, it is determined that object A cannot be clearly observed by the user and needs magnified display, while object B can be clearly observed and needs no magnified display.
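The distance rule above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name and the structure of the vision record (`far_limit` for the myopic limit L0, `near_limit` for the hyperopic limit L0') are assumptions.

```python
def needs_magnification(distance, vision):
    """Decide whether an observed object should be magnified, based on its
    separation distance and the user's vision information.

    `vision` is a hypothetical record holding the farthest distance a myopic
    user can see clearly (`far_limit`) and/or the nearest distance a
    hyperopic user can see clearly (`near_limit`).
    """
    far_limit = vision.get("far_limit")    # L0 for a myopic user
    near_limit = vision.get("near_limit")  # L0' for a hyperopic user

    if far_limit is not None and distance > far_limit:
        return True   # too far away for a myopic user
    if near_limit is not None and distance < near_limit:
        return True   # too close for a hyperopic user
    return False      # within the clearly visible range

# A myopic user who sees clearly up to 5 m:
myopia = {"far_limit": 5.0}
print(needs_magnification(3.0, myopia))  # False: object A, L1 < L0
print(needs_magnification(8.0, myopia))  # True: object B, L2 > L0
```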
2. Detail-based judgment
As another exemplary implementation, whether the observed object belongs to objects the user can clearly observe may be judged according to whether the observed object contains detail information.
For example, if the observed object contains no detail information at all, say observed object A in FIG. 7 is a white wall, magnified display is essentially pointless; whereas if the observed object contains detail information, say observed object B in FIG. 7 is a painting, magnified display lets the user examine the painting's details closely. The key of this embodiment therefore lies in determining whether the observed object contains detail.
Mode 1
The smart glasses may acquire the degree of variation of a preset pixel feature parameter value of the observed object, and judge whether the observed object contains detail information according to whether that degree of variation is greater than or equal to a preset degree of variation.
In this embodiment, the pixel feature parameter value may be any one or more attribute values of each pixel, such as the grayscale value or the color value of each color channel. Taking the "white wall" and the "painting" of FIG. 7 as examples: in the former image all pixels are white, so the corresponding grayscale values (the grayscale value is used here as an example; other types of pixel feature parameter values could equally be used) are identical and show no variation, and the wall is determined to contain no detail information. In the latter, the many details in the painting, such as color changes, clearly make the grayscale values of the pixels differ and vary considerably (e.g., by at least the preset degree of variation), so the painting is determined to contain detail information.
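The grayscale-variation check of Mode 1 can be sketched as below. Using the standard deviation as the "degree of variation" and the threshold value are illustrative assumptions; any dispersion measure over any pixel feature parameter would fit the description.

```python
def grayscale_std(pixels):
    """Standard deviation of grayscale values, used here as the 'degree of
    variation' of the preset pixel feature parameter."""
    n = len(pixels)
    mean = sum(pixels) / n
    return (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5

def contains_detail(pixels, threshold=10.0):
    """The object is taken to contain detail information when the variation
    of its grayscale values reaches the preset threshold."""
    return grayscale_std(pixels) >= threshold

wall = [255] * 100                 # uniformly white wall: no variation
painting = [30, 200, 90, 250, 10]  # painting: strong variation
print(contains_detail(wall))       # False
print(contains_detail(painting))   # True
```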
Mode 2
The smart glasses may acquire the objects surrounding the observed object, and judge whether the observed object contains detail information according to whether the degree of variation of its preset pixel feature parameter value is consistent with that of the surrounding objects.
Still taking FIG. 7 as an example: observed object A is part of the "white wall", so the degree of variation of its preset pixel feature parameter value is consistent with its surroundings, meaning object A and its surroundings together form one large whole whose details the user has no need to examine; it is therefore judged to contain no detail information. Observed object B, a painting hanging on the white wall, shows a degree of variation inconsistent with its surroundings and a considerable degree of variation (e.g., at least the preset degree of variation), so it is determined to contain detail information.
3. Time-based judgment
As yet another exemplary implementation, the duration for which the user observes a same object, or the line-of-sight focusing time when observing a same object, may be acquired, and whether the observed object belongs to objects the user can clearly observe is judged according to whether the duration or focusing time is greater than or equal to a preset length of time.
In this embodiment, when the user observes or focuses on an object for a long time, the user may wish to examine it carefully, or may be straining to focus because the object cannot be seen clearly; either case indicates that the user wants to observe the current object in detail, so magnified display is needed.
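The dwell-time check can be sketched as a small per-object timer. The class, its interface, and the 2-second preset duration are illustrative assumptions; the patent only requires comparing the observation duration (or focusing time) against a preset length of time.

```python
class GazeTimer:
    """Tracks how long the user's line of sight stays on one object and
    reports whether the preset duration has been reached."""

    def __init__(self, preset_seconds=2.0):
        self.preset = preset_seconds
        self.current = None  # id of the object currently gazed at
        self.start = None    # timestamp when the gaze landed on it

    def update(self, object_id, now):
        """Feed one gaze sample; returns True once the same object has been
        observed for at least the preset duration."""
        if object_id != self.current:
            self.current, self.start = object_id, now  # gaze moved on
        return (now - self.start) >= self.preset

timer = GazeTimer()
print(timer.update("painting", 0.0))  # False: gaze just arrived
print(timer.update("painting", 2.5))  # True: sustained for 2.5 s
print(timer.update("wall", 3.0))      # False: gaze moved to a new object
```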
4. Judgment based on information elements
As yet another exemplary implementation, whether the observed object belongs to objects the user can clearly observe may be judged according to whether the observed object contains a preset type of information element.
For example, after determining the current observed object, the smart glasses may further identify the information elements it contains, such as characters or icons (e.g., a product logo); if such elements are present, it is judged that the object cannot be clearly observed by the user and needs magnified display.
It should be noted that:
Four approaches have been introduced above, and in practice any one of them, or a combination of several, may be applied to judge accurately whether the observed object can be clearly observed by the user. When a combination of approaches is used, as one exemplary embodiment, the observed object may be judged not clearly observable whenever at least one approach reaches that result; or, as another exemplary embodiment, only when all of the applied approaches reach that result.
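The two combination policies in the note above map directly onto `any` and `all`. A minimal sketch (the function name and the boolean-list interface are assumptions):

```python
def cannot_observe_clearly(checks, mode="any"):
    """Combine several judgment results, each True when its criterion says
    the object cannot be clearly observed.

    mode="any": one positive result suffices (first policy above);
    mode="all": every applied criterion must agree (second policy).
    """
    return any(checks) if mode == "any" else all(checks)

# Distance check says "too far", detail check says "no detail":
results = [True, False]
print(cannot_observe_clearly(results, mode="any"))  # True
print(cannot_observe_clearly(results, mode="all"))  # False
```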
IV. Magnified display of the observed object
As an exemplary embodiment, a display region of a preset specification may be shown, and the image of the observed object magnified and displayed within it.
Assuming FIG. 7 depicts the user's field of view as imaged in the user's eyes, FIG. 8 may depict the smart glasses magnifying and displaying observed object B. As shown in FIG. 8, a display region may be shown within the image; it may take the "magnifier" form of FIG. 8 or another form, which the present disclosure does not limit. As FIG. 8 shows, the magnified display of object B reveals the dotted pattern on the left hem of the figure's skirt in the painting, detail the user could not observe directly.
The user can control the zoom ratio of the observed object; for example, after the user issues a magnification instruction to the smart glasses, the view of FIG. 9 is obtained. In FIG. 9, both the display region and the content of observed object B within it are enlarged; of course, only the content within the region could be enlarged instead, but enlarging the region ensures a sufficiently large display area for convenient viewing.
Further, the smart glasses can monitor the user's line-of-sight movement events, move the display region according to such an event, and update the magnified image displayed within the region according to the magnification ratio of the observed object's image. As shown in FIG. 10, when the user's line of sight moves to the right, the display region moves with it, and the content within the region is updated accordingly, for example from the "left hem" of FIG. 9 to the "right hem" of FIG. 10.
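Moving the display region with the gaze and updating its magnified content can be sketched as a source-rectangle computation: center a window on the gaze point and crop `1/scale` of the region size from the captured image. Purely illustrative; the coordinates, the clamping, and the return format are assumptions, not the patent's implementation.

```python
def magnified_view(gaze_x, gaze_y, region_size, scale, image_w, image_h):
    """Center a square display region on the gaze point and return the
    source rectangle (left, top, width, height) of the observed object's
    image that should be magnified `scale` times into that region."""
    src_w = region_size / scale
    src_h = region_size / scale
    # Clamp so the source rectangle stays inside the captured image.
    left = min(max(gaze_x - src_w / 2, 0), image_w - src_w)
    top = min(max(gaze_y - src_h / 2, 0), image_h - src_h)
    return (left, top, src_w, src_h)

# Gaze moves right: the source window, and hence the magnified content
# (e.g., from the left hem to the right hem), moves with it.
print(magnified_view(100, 100, 80, 2.0, 640, 480))  # (80.0, 80.0, 40.0, 40.0)
print(magnified_view(160, 100, 80, 2.0, 640, 480))  # (140.0, 80.0, 40.0, 40.0)
```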
Of course, if the user's line of sight moves out of the range of the observed object, the magnified display of the object's image should be canceled, because the "observed object" has changed: a new observed object must be determined according to the above embodiments, along with whether it needs magnified display.
It should be noted that the "magnifier"-style "display region" shown in FIGS. 8-10 is not required: the smart glasses may simply enlarge and display the content, in which case the entire display screen of the smart glasses serves as the "display region".
Corresponding to the embodiments of the display control method described above, the present disclosure also provides embodiments of a display control device.
FIG. 11 is a block diagram of a display control device according to an exemplary embodiment. Referring to FIG. 11, the device includes a judging unit 1101 and a display unit 1102.
The judging unit 1101 is configured to judge, according to vision information of a user, whether an observed object belongs to objects that the user can clearly observe;
the display unit 1102 is configured to, if the judgment result is negative, magnify and display an acquired image of the observed object.
As shown in FIG. 12, a block diagram of another display control device according to an exemplary embodiment, on the basis of the embodiment shown in FIG. 11 the device may further include: an acquiring unit 1103.
The acquiring unit 1103 is configured to acquire the vision information of the user in one of the following ways:
performing optometry on the user's eyes to obtain the vision information;
or, according to received user input information, the user input information containing the vision information;
or, reading, according to a logged-in account of the user, the vision information associated with the logged-in account.
As shown in FIG. 13, a block diagram of another display control device according to an exemplary embodiment, on the basis of the embodiment shown in FIG. 11 the device may further include: a determining unit 1104 and a capturing unit 1105.
The determining unit 1104 is configured to determine a current field of view of the user;
the capturing unit 1105 is configured to capture line-of-sight features of the user and take an object within the field of view that matches the line-of-sight features as the observed object.
It should be noted that the structures of the determining unit 1104 and the capturing unit 1105 in the device embodiment shown in FIG. 13 may also be included in the foregoing device embodiment of FIG. 12, which is not limited by the present disclosure.
As shown in FIG. 14, a block diagram of another display control device according to an exemplary embodiment, on the basis of the embodiment shown in FIG. 11 the judging unit 1101 may include: a distance acquiring subunit 1101A and a distance judging subunit 1101B.
The distance acquiring subunit 1101A is configured to acquire a separation distance between the observed object and the user;
the distance judging subunit 1101B is configured to judge, according to whether the separation distance is greater than or equal to a preset visible distance, whether the observed object belongs to objects that the user can clearly observe; wherein, if so, it does not.
It should be noted that the structures of the distance acquiring subunit 1101A and the distance judging subunit 1101B in the device embodiment shown in FIG. 14 may also be included in the foregoing device embodiments of FIGS. 12-13, which is not limited by the present disclosure.
As shown in FIG. 15, a block diagram of another display control device according to an exemplary embodiment, on the basis of the embodiment shown in FIG. 11 the judging unit 1101 may include: a detail judging subunit 1101C.
The detail judging subunit 1101C is configured to judge, according to whether the observed object contains detail information, whether the observed object belongs to objects that the user can clearly observe; wherein, if it does, it does not belong to such objects.
It should be noted that the structure of the detail judging subunit 1101C in the device embodiment shown in FIG. 15 may also be included in the foregoing device embodiments of FIGS. 12-14, which is not limited by the present disclosure.
As shown in FIG. 16, a block diagram of another display control device according to an exemplary embodiment, on the basis of the embodiment shown in FIG. 15 the detail judging subunit 1101C includes: a variation acquiring module 1601 and a first variation judging module 1602.
The variation acquiring module 1601 is configured to acquire a degree of variation of a preset pixel feature parameter value of the observed object;
the first variation judging module 1602 is configured to judge, according to whether the degree of variation of the preset pixel feature parameter value is greater than or equal to a preset degree of variation, whether the observed object contains detail information; wherein, if so, it does.
As shown in FIG. 17, a block diagram of another display control device according to an exemplary embodiment, on the basis of the embodiment shown in FIG. 15 the detail judging subunit 1101C includes: an object acquiring module 1603 and a second variation judging module 1604.
The object acquiring module 1603 is configured to acquire objects surrounding the observed object;
the second variation judging module 1604 is configured to judge, according to whether the degree of variation of the preset pixel feature parameter value of the observed object is consistent with that of its surrounding objects, whether the observed object contains detail information; wherein, if inconsistent, it does.
As shown in FIG. 18, a block diagram of another display control device according to an exemplary embodiment, on the basis of the embodiment shown in FIG. 11 the judging unit 1101 may include: a time acquiring subunit 1101D and a time judging subunit 1101E.
The time acquiring subunit 1101D is configured to acquire a duration for which the user observes a same object, or a line-of-sight focusing time when observing a same object;
the time judging subunit 1101E is configured to judge, according to whether the duration or the line-of-sight focusing time is greater than or equal to a preset length of time, whether the observed object belongs to objects that the user can clearly observe; wherein, if so, it does not.
It should be noted that the structures of the time acquiring subunit 1101D and the time judging subunit 1101E in the device embodiment shown in FIG. 18 may also be included in the foregoing device embodiments of FIGS. 12-17, which is not limited by the present disclosure.
As shown in FIG. 19, a block diagram of another display control device according to an exemplary embodiment, on the basis of the embodiment shown in FIG. 11 the judging unit 1101 may include: a type judging subunit 1101F.
The type judging subunit 1101F is configured to judge, according to whether the observed object contains a preset type of information element, whether the observed object belongs to objects that the user can clearly observe; wherein, if it does, it does not.
It should be noted that the structure of the type judging subunit 1101F in the device embodiment shown in FIG. 19 may also be included in the foregoing device embodiments of FIGS. 12-18, which is not limited by the present disclosure.
As shown in FIG. 20, a block diagram of another display control device according to an exemplary embodiment, on the basis of the embodiment shown in FIG. 11 the display unit 1102 includes: a region display subunit 1102A and a magnification processing subunit 1102B.
The region display subunit 1102A is configured to display a display region of a preset specification;
the magnification processing subunit 1102B is configured to magnify and display the image of the observed object within the display region.
It should be noted that the structures of the region display subunit 1102A and the magnification processing subunit 1102B in the device embodiment shown in FIG. 20 may also be included in the foregoing device embodiments of FIGS. 12-18, which is not limited by the present disclosure.
As shown in FIG. 21, a block diagram of another display control device according to an exemplary embodiment, on the basis of the embodiment shown in FIG. 20 the device may further include: a monitoring unit 1106 and an updating unit 1107.
The monitoring unit 1106 is configured to monitor line-of-sight movement events of the user;
the updating unit 1107 is configured to move the display region according to the line-of-sight movement event, and update the magnified image displayed within the display region according to a magnification ratio of the image of the observed object.
As shown in FIG. 22, a block diagram of another display control device according to an exemplary embodiment, on the basis of the embodiment shown in FIG. 11 the device may further include: a canceling unit 1108.
The canceling unit 1108 is configured to cancel the magnified display of the image of the observed object when the user's line of sight moves out of the range of the observed object.
It should be noted that the structure of the canceling unit 1108 in the device embodiment shown in FIG. 22 may also be included in the foregoing device embodiments of FIGS. 12-21, which is not limited by the present disclosure.
With regard to the devices in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method, and will not be elaborated here.
Since the device embodiments basically correspond to the method embodiments, reference may be made to the corresponding descriptions of the method embodiments. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solution of the present disclosure. Those of ordinary skill in the art can understand and implement it without creative effort.
Correspondingly, the present disclosure also provides a display control device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to: judge, according to vision information of a user, whether an observed object belongs to objects that the user can clearly observe; and if not, magnify and display an acquired image of the observed object.
Correspondingly, the present disclosure also provides a terminal, the terminal including a memory and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by one or more processors, the one or more programs containing instructions for: judging, according to vision information of a user, whether an observed object belongs to objects that the user can clearly observe; and if not, magnifying and displaying an acquired image of the observed object.
FIG. 23 is a block diagram of an apparatus 2300 for display control according to an exemplary embodiment. For example, the apparatus 2300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to FIG. 23, the apparatus 2300 may include one or more of the following components: a processing component 2302, a memory 2304, a power component 2306, a multimedia component 2308, an audio component 2310, an input/output (I/O) interface 2312, a sensor component 2314, and a communication component 2316.
The processing component 2302 generally controls the overall operation of the apparatus 2300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 2302 may include one or more processors 2320 to execute instructions to complete all or part of the steps of the methods described above. In addition, the processing component 2302 may include one or more modules to facilitate interaction between the processing component 2302 and other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 2308 and the processing component 2302.
The memory 2304 is configured to store various types of data to support operation at the apparatus 2300. Examples of such data include instructions for any application or method operating on the apparatus 2300, contact data, phone book data, messages, pictures, videos, and so on. The memory 2304 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 2306 provides power to the various components of the apparatus 2300. The power component 2306 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 2300.
The multimedia component 2308 includes a screen providing an output interface between the apparatus 2300 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with it. In some embodiments, the multimedia component 2308 includes a front camera and/or a rear camera. When the apparatus 2300 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 2310 is configured to output and/or input audio signals. For example, the audio component 2310 includes a microphone (MIC) configured to receive external audio signals when the apparatus 2300 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signals may be further stored in the memory 2304 or transmitted via the communication component 2316. In some embodiments, the audio component 2310 also includes a speaker for outputting audio signals.
The I/O interface 2312 provides an interface between the processing component 2302 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 2314 includes one or more sensors for providing status assessments of various aspects of the apparatus 2300. For example, the sensor component 2314 may detect the open/closed state of the apparatus 2300 and the relative positioning of components (e.g., the display and keypad of the apparatus 2300); it may also detect a change in position of the apparatus 2300 or of one of its components, the presence or absence of user contact with the apparatus 2300, the orientation or acceleration/deceleration of the apparatus 2300, and temperature changes of the apparatus 2300. The sensor component 2314 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. It may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 2314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 2316 is configured to facilitate wired or wireless communication between the apparatus 2300 and other devices. The apparatus 2300 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 2316 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 2316 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 2300 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium comprising instructions, such as the memory 2304 comprising instructions executable by the processor 2320 of the apparatus 2300 to perform the above methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (25)

  1. 一种显示控制方法,其特征在于,包括:
    根据用户的视力信息,判断被观察物是否属于用户能够清楚观察的对象;
    若不属于,则将获取的所述被观察物的图像进行放大展示。
  2. 根据权利要求1所述的方法,其特征在于,还包括:
    通过下述方式获取用户的视力信息:
    对用户的眼睛进行验光,得到所述视力信息;
    或者,根据接收到的用户输入信息,且所述用户输入信息中包含所述视力信息;
    或者,根据所述用户的已登录账号,读取与所述已登录账号相关联的视力信息。
  3. 根据权利要求1所述的方法,其特征在于,还包括:
    确定当前的用户视野范围;
    捕捉用户的视线特征,并将所述用户视野范围内匹配于所述视线特征的物体作为所述被观察物。
  4. 根据权利要求1所述的方法,其特征在于,所述判断被观察物是否属于用户能够清楚观察的对象,包括:
    获取所述被观察物与用户之间的间隔距离;
    根据所述间隔距离是否满足大于或等于预设可视距离,判断所述被观察物是否属于用户能够清楚观察的对象;其中,若满足则不属于。
  5. 根据权利要求1所述的方法,其特征在于,所述判断被观察物是否属于用户能够清楚观察的对象,包括:
    根据所述被观察物是否包含细节信息,判断所述被观察物是否属于用户能够清楚观察的对象;其中,若包含则不属于。
  6. 根据权利要求5所述的方法,其特征在于,所述根据所述被观察物是否包含细节信息,判断所述被观察物是否属于用户能够清楚观察的对象,包括:
    获取所述被观察物的预设像素特征参数值的变化程度;
    根据所述预设像素特征参数值的变化程度是否满足大于或等于预设变化程度,判断所述被观察物是否包含细节信息;其中,若满足则包含。
  7. 根据权利要求5所述的方法,其特征在于,所述根据所述被观察物是否包含细节信息,判断所述被观察物是否属于用户能够清楚观察的对象,包括:
    获取所述被观察物的周围物体;
    根据所述被观察物的预设像素特征参数值的变化程度是否与其周围物体一致,判 断所述被观察物是否包含细节信息;其中,若不一致则包含。
  8. 根据权利要求1所述的方法,其特征在于,所述判断被观察物是否属于用户能够清楚观察的对象,包括:
    获取用户观察同一物体的持续时间,或观察同一物体时的视线对焦时间;
    根据所述持续时间或所述视线对焦时间是否满足大于或等于预设时长,判断所述被观察物是否属于用户能够清楚观察的对象;其中,若满足则不属于。
  9. 根据权利要求1所述的方法,其特征在于,所述判断被观察物是否属于用户能够清楚观察的对象,包括:
    根据所述被观察物中是否包含预设类型的信息元素,判断所述被观察物是否属于用户能够清楚观察的对象;其中,若包含则不属于。
  10. 根据权利要求1所述的方法,其特征在于,所述将获取的所述被观察物的图像进行放大展示,包括:
    显示一预设规格的展示区域;
    将所述被观察物的图像进行放大展示于所述展示区域内。
  11. 根据权利要求10所述的方法,其特征在于,还包括:
    监测用户的视线移动事件;
    根据所述视线移动事件对所述展示区域进行移动,并根据对所述被观察物的图像的放大比例,更新放大展示于所述展示区域内的图像。
  12. 根据权利要求1所述的方法,其特征在于,还包括:
    当用户的视线移动出所述被观察物的范围时,取消对所述被观察物的图像的放大展示。
  13. 一种显示控制装置,其特征在于,包括:
    判断单元,根据用户的视力信息,判断被观察物是否属于用户能够清楚观察的对象;
    展示单元,在判断结果为不属于的情况下,将获取的所述被观察物的图像进行放大展示。
  14. 根据权利要求13所述的装置,其特征在于,还包括:
    获取单元,通过下述方式获取用户的视力信息:
    对用户的眼睛进行验光,得到所述视力信息;
    或者,根据接收到的用户输入信息,且所述用户输入信息中包含所述视力信息;
    或者,根据所述用户的已登录账号,读取与所述已登录账号相关联的视力信息。
  15. 根据权利要求13所述的装置,其特征在于,还包括:
    确定单元,确定当前的用户视野范围;
    捕捉单元,捕捉用户的视线特征,并将所述用户视野范围内匹配于所述视线特征的物体作为所述被观察物。
  16. 根据权利要求13所述的装置,其特征在于,所述判断单元包括:
    距离获取子单元,获取所述被观察物与用户之间的间隔距离;
    距离判断子单元,根据所述间隔距离是否满足大于或等于预设可视距离,判断所述被观察物是否属于用户能够清楚观察的对象;其中,若满足则不属于。
  17. 根据权利要求13所述的装置,其特征在于,所述判断单元包括:
    细节判断子单元,根据所述被观察物是否包含细节信息,判断所述被观察物是否属于用户能够清楚观察的对象;其中,若包含则不属于。
  18. 根据权利要求17所述的装置,其特征在于,所述细节判断子单元包括:
    变化获取模块,获取所述被观察物的预设像素特征参数值的变化程度;
    第一变化判断模块,根据所述预设像素特征参数值的变化程度是否满足大于或等于预设变化程度,判断所述被观察物是否包含细节信息;其中,若满足则包含。
  19. 根据权利要求17所述的装置,其特征在于,所述细节判断子单元包括:
    物体获取模块,获取所述被观察物的周围物体;
    第二变化判断模块,根据所述被观察物的预设像素特征参数值的变化程度是否与其周围物体一致,判断所述被观察物是否包含细节信息;其中,若不一致则包含。
  20. The device according to claim 13, wherein the determination unit comprises:
    a time acquisition subunit configured to acquire a duration for which the user observes a same object, or a gaze focusing time when the user observes the same object;
    a time determination subunit configured to determine whether the observed object belongs to objects that the user can clearly observe according to whether the duration or the gaze focusing time is greater than or equal to a preset time length, wherein if so, the observed object does not belong to the objects that the user can clearly observe.
  21. The device according to claim 13, wherein the determination unit comprises:
    a type determination subunit configured to determine whether the observed object belongs to objects that the user can clearly observe according to whether the observed object contains an information element of a preset type, wherein if so, the observed object does not belong to the objects that the user can clearly observe.
  22. The device according to claim 13, wherein the display unit comprises:
    a region display subunit configured to display a display region of a preset specification;
    a magnification processing subunit configured to magnify and display the image of the observed object within the display region.
  23. The device according to claim 22, further comprising:
    a monitoring unit configured to monitor a gaze movement event of the user;
    an updating unit configured to move the display region according to the gaze movement event, and to update the magnified image displayed within the display region according to a magnification ratio of the image of the observed object.
  24. The device according to claim 13, further comprising:
    a cancellation unit configured to cancel the magnified display of the image of the observed object when the user's gaze moves out of the range of the observed object.
  25. An electronic device, comprising:
    a processor; and
    a memory configured to store instructions executable by the processor;
    wherein the processor is configured to:
    determine, according to vision information of a user, whether an observed object belongs to objects that the user can clearly observe; and
    if the observed object does not belong to the objects that the user can clearly observe, magnify and display an acquired image of the observed object.
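Claims 10-12 (and the corresponding device claims 22-24) describe a magnified display region of preset size that follows the user's gaze and is canceled once the gaze leaves the observed object. The following sketch shows one way that update loop could be structured; the geometry types, the fixed region size, the constant magnification ratio, and the centering of the region on the gaze point are all illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

@dataclass
class DisplayState:
    region: Optional[Rect]  # the display region, or None when magnification is off
    ratio: float = 3.0      # magnification ratio (assumed constant here)

def on_gaze_moved(state: DisplayState, obj_bounds: Rect, gaze: tuple) -> None:
    """Handle a gaze movement event: move the display region with the gaze,
    and cancel the magnified display when the gaze leaves the observed object."""
    gx, gy = gaze
    if not obj_bounds.contains(gx, gy):
        state.region = None               # gaze left the observed object: cancel
        return
    if state.region is None:
        state.region = Rect(0, 0, 100, 60)  # display region of a "preset specification"
    # Keep the preset-size region centered on the gaze point.
    state.region.x = gx - state.region.w / 2
    state.region.y = gy - state.region.h / 2
    # At this point the image inside the region would be re-rendered from the
    # acquired image of the observed object at state.ratio magnification.

state = DisplayState(region=None)
bounds = Rect(0, 0, 200, 200)  # bounds of the observed object
on_gaze_moved(state, bounds, (50, 50))
print(state.region)   # region of preset size, centered on the gaze point
on_gaze_moved(state, bounds, (500, 500))
print(state.region)   # prints None: the gaze moved out of the observed object
```

The claims leave the rendering itself open; only the movement of the region with the gaze and the cancellation condition are modeled here.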
PCT/CN2015/088685 2015-03-31 2015-08-31 Display control method and device, and electronic device WO2016155226A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
RU2016111926A RU2648593C2 (ru) 2015-03-31 2015-08-31 Display control method and apparatus, electronic device
KR1020167008479A KR101909721B1 (ko) 2015-03-31 2015-08-31 Method and apparatus for display control, electronic device
JP2016521986A JP6263617B2 (ja) 2015-03-31 2015-08-31 Display control method and device, electronic apparatus
MX2016002722A MX355082B (es) 2015-03-31 2015-08-31 Display control method and apparatus, electronic device.
BR112016006044A BR112016006044A2 (pt) 2015-03-31 2015-08-31 Method and apparatus for display control, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510150291.7A CN104699250B (zh) 2015-03-31 2015-03-31 Display control method and device, and electronic device
CN201510150291.7 2015-03-31

Publications (1)

Publication Number Publication Date
WO2016155226A1 (zh) 2016-10-06

Family

ID=53346460

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/088685 WO2016155226A1 (zh) Display control method and device, and electronic device

Country Status (9)

Country Link
US (1) US9983667B2 (zh)
EP (1) EP3076382B1 (zh)
JP (1) JP6263617B2 (zh)
KR (1) KR101909721B1 (zh)
CN (1) CN104699250B (zh)
BR (1) BR112016006044A2 (zh)
MX (1) MX355082B (zh)
RU (1) RU2648593C2 (zh)
WO (1) WO2016155226A1 (zh)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104699250B (zh) * 2015-03-31 2018-10-19 小米科技有限责任公司 Display control method and device, and electronic device
CN105786430B (zh) * 2016-02-25 2020-08-25 联想(北京)有限公司 Information processing method and electronic device
CN106227424A (zh) * 2016-07-20 2016-12-14 北京小米移动软件有限公司 Display processing method and device for a picture
KR101803407B1 (ko) * 2016-10-19 2017-11-30 주식회사 픽셀 디스플레이 User device for controlling a display, and program stored in a computer-readable storage medium
KR102697559B1 (ko) * 2016-12-22 2024-08-23 삼성전자주식회사 Image display method, storage medium and electronic device
CN106774912B (zh) * 2016-12-26 2020-04-07 宇龙计算机通信科技(深圳)有限公司 Method and device for controlling a VR device
CN107222636A (zh) * 2017-06-28 2017-09-29 上海传英信息技术有限公司 Optometry system and optometry method based on a smart terminal
KR101889463B1 (ko) * 2017-08-03 2018-08-17 연세대학교 산학협력단 Method for controlling screen display of a user terminal, and user terminal using the same
US10885613B2 (en) 2019-01-10 2021-01-05 International Business Machines Corporation Real-time alteration of underwater images
US10871384B2 (en) * 2019-02-26 2020-12-22 Thomas P. Moyer Apparatus and methods utilizing emissive patterns to determine positional information
CN109977836B (zh) * 2019-03-19 2022-04-15 维沃移动通信有限公司 Information acquisition method and terminal
JP7333460B2 (ja) * 2020-02-21 2023-08-24 マクセル株式会社 Information display device
CN113132642B (zh) * 2021-04-26 2023-09-26 维沃移动通信有限公司 Image display method and device, and electronic device
CN113238700B (zh) * 2021-06-03 2024-04-05 艾视雅健康科技(苏州)有限公司 Head-mounted electronic assisted-vision device and automatic image magnification method thereof

Citations (6)

Publication number Priority date Publication date Assignee Title
WO2002046907A2 (en) * 2000-12-07 2002-06-13 Betacom Corporation Inc. Sight enhancement device
US6731326B1 (en) * 1999-04-06 2004-05-04 Innoventions, Inc. Low vision panning and zooming device
US20050162512A1 (en) * 2002-03-28 2005-07-28 Seakins Paul J. Low vision video magnifier
CN103914151A * 2014-04-08 2014-07-09 小米科技有限责任公司 Information display method and device
CN104407437A * 2014-10-20 2015-03-11 深圳市亿思达科技集团有限公司 Zoom head-mounted device
CN104699250A * 2015-03-31 2015-06-10 小米科技有限责任公司 Display control method and device, and electronic device

Family Cites Families (24)

Publication number Priority date Publication date Assignee Title
JP3744984B2 (ja) * 1995-10-04 2006-02-15 キヤノン株式会社 Information display device
JPH10191204A (ja) * 1996-12-20 1998-07-21 Canon Inc Image processing device
US6690351B1 (en) * 2000-04-06 2004-02-10 Xybernaut Corporation Computer display optimizer
JP4328136B2 (ja) * 2003-06-19 2009-09-09 国立大学法人 奈良先端科学技術大学院大学 Interest level estimation device
CN102670163B (zh) * 2004-04-01 2016-04-13 威廉·C·托奇 Systems and methods for controlling a computing device
JP2006023953A (ja) * 2004-07-07 2006-01-26 Fuji Photo Film Co Ltd Information display system
JP4972004B2 (ja) * 2008-01-30 2012-07-11 国立大学法人東京工業大学 Image conversion method and program
WO2010009447A2 (en) * 2008-07-18 2010-01-21 Doheny Eye Institute Optical coherence tomography - based ophthalmic testing methods, devices and systems
JP2010099335A (ja) * 2008-10-24 2010-05-06 National Institute Of Advanced Industrial Science & Technology Stereoscopic vision function testing method
US8917286B2 (en) * 2008-11-11 2014-12-23 Sony Corporation Image processing device, information processing device, image processing method, and information processing method
JP5375481B2 (ja) * 2009-09-24 2013-12-25 ブラザー工業株式会社 Head-mounted display
KR101651430B1 (ko) * 2009-12-18 2016-08-26 삼성전자주식회사 Apparatus and method for adjusting the size of output data in a portable terminal
CN102812416B (zh) * 2010-06-17 2015-10-07 松下电器(美国)知识产权公司 Pointing input device, pointing input method, program, recording medium, and integrated circuit
KR101659091B1 (ko) * 2010-08-31 2016-09-22 삼성전자주식회사 Apparatus and method for generating a text collage message
JP2012073940A (ja) * 2010-09-29 2012-04-12 シャープ株式会社 Display system, data processing device, display device, instruction method, display method, computer program, and recording medium
CN102467232A (zh) * 2010-11-15 2012-05-23 财团法人资讯工业策进会 Electronic device and display method applied to the electronic device
CN103886622B (zh) * 2012-12-21 2017-10-31 腾讯科技(深圳)有限公司 Method and device for implementing automatic image region division
US9406253B2 (en) * 2013-03-14 2016-08-02 Broadcom Corporation Vision corrective display
FR3005194B1 (fr) * 2013-04-25 2016-09-23 Essilor Int Method for personalizing an electronic image-display device
KR102098277B1 (ko) 2013-06-11 2020-04-07 삼성전자주식회사 Method for improving visibility using gaze tracking, storage medium and electronic device
JP2015022580A (ja) * 2013-07-19 2015-02-02 富士通フロンテック株式会社 Transaction device, support method, and support program
US10386637B2 (en) * 2014-01-15 2019-08-20 Maxell, Ltd. Information display terminal, information display system, and information display method
US9959591B2 (en) * 2014-07-31 2018-05-01 Seiko Epson Corporation Display apparatus, method for controlling display apparatus, and program
US10042031B2 (en) * 2015-02-11 2018-08-07 Xerox Corporation Method and system for detecting that an object of interest has re-entered a field of view of an imaging device


Also Published As

Publication number Publication date
US20160291693A1 (en) 2016-10-06
RU2016111926A (ru) 2017-10-04
MX355082B (es) 2018-04-04
CN104699250B (zh) 2018-10-19
KR101909721B1 (ko) 2018-10-18
KR20160127709A (ko) 2016-11-04
MX2016002722A (es) 2016-12-09
RU2648593C2 (ru) 2018-03-26
US9983667B2 (en) 2018-05-29
CN104699250A (zh) 2015-06-10
EP3076382A1 (en) 2016-10-05
JP6263617B2 (ja) 2018-01-17
JP2017514186A (ja) 2017-06-01
BR112016006044A2 (pt) 2017-08-01
EP3076382B1 (en) 2019-02-06


Legal Events

Code Title Description
WWE WIPO information: entry into national phase (Ref document number: MX/A/2016/002722; Country of ref document: MX)
ENP Entry into the national phase (Ref document number: 20167008479; Country of ref document: KR; Kind code of ref document: A) (Ref document number: 2016111926; Country of ref document: RU; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2016521986; Country of ref document: JP; Kind code of ref document: A)
REG Reference to national code (Ref country code: BR; Ref legal event code: B01A; Ref document number: 112016006044; Country of ref document: BR)
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 15887183; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 112016006044; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20160318)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 15887183; Country of ref document: EP; Kind code of ref document: A1)