CN114697513A - Method for adjusting focusing area by using near-eye display device and near-eye display device - Google Patents


Info

Publication number
CN114697513A
CN114697513A (application CN202011578670.3A)
Authority
CN
China
Prior art keywords
camera
magnification
focusing
focusing area
display device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011578670.3A
Other languages
Chinese (zh)
Inventor
罗洋 (Luo Yang)
黄正宇 (Huang Zhengyu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kingfisher Vision Technology Co ltd
Original Assignee
Beijing Kingfisher Vision Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kingfisher Vision Technology Co ltd filed Critical Beijing Kingfisher Vision Technology Co ltd
Priority: CN202011578670.3A
Publication: CN114697513A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/675Focus control based on electronic image sensor signals comprising setting of focusing regions
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a method for adjusting a focusing area by using a near-eye display device, wherein the near-eye display device comprises a camera and a display screen, and the method comprises the following steps: S1: receiving an input for adjusting the magnification; S2: calculating, according to the magnification, a focusing area corresponding to the magnification within the shooting range of the camera; S3: focusing on the focusing area; S4: acquiring an environment image through the camera; S5: enlarging the part of the environment image corresponding to the focusing area and displaying it on the display screen. By means of a preset model that associates the magnification with the focusing area, the method can promptly generate a properly focused display picture at the specified magnification, thereby significantly reducing the dependence on vision during focusing and providing clear imaging simply and stably; it is particularly suitable for offering convenience and help to visually impaired people.

Description

Method for adjusting focusing area by using near-eye display device and near-eye display device
Technical Field
The present disclosure relates generally to the field of optoelectronic technologies, and more particularly, to a method for adjusting a focusing area by using a near-eye display device and a near-eye display device.
Background
With the rapid development of computer and display technology, imaging the external environment through electronic devices such as mobile phones, digital cameras and smart glasses has become increasingly common in work and daily life. In general, improving imaging clarity requires fine adjustment of the relative position of the lens in the device, i.e. the focusing process. The most widely used technique here is auto-focusing based on digital image processing, which evaluates the image with a selected sharpness metric and adjusts the camera's focal length in a feedback loop, thereby making the image clearer.
In most cases, a user can also make a local area clearer than the rest of the view by actively selecting it. Taking a mobile phone with a touch-sensitive display (a "touch screen") as an example: to make a person in the desired view appear sharp relative to the background, the user typically taps the person's position on the phone's display, thereby obtaining a more desirable imaging effect.
Although this method of improving image sharpness by actively selecting the focus position is convenient for most people, it does not fully consider the needs of the visually impaired, for whom finely comparing and recognizing local regions in the field of vision is often difficult. Actively selecting a focus position is therefore unsuitable for visually impaired people, and its heavy reliance on vision may even make the device inconvenient to use.
Therefore, there is an urgent need to develop a new focus adjustment method that is more suitable for visually impaired people, reduces the degree of dependence on vision when focusing is performed, and provides clear imaging more easily and stably.
Disclosure of Invention
It is an object of the present application to provide an information processing method and system for adjusting focus under enlarged display, which at least partially solves the above-mentioned problems in the prior art.
According to an aspect of the present application, there is provided a method of adjusting a focus area using a near-eye display device, wherein the near-eye display device includes a camera and a display screen, the method including:
s1: receiving an input for adjusting the magnification;
s2: according to the magnification factor, calculating a focusing area corresponding to the magnification factor in the shooting range of the camera;
s3: focusing the focusing area;
s4: acquiring an environment image through the camera; and
s5: and amplifying the part of the environment image corresponding to the focusing area and displaying the part of the environment image through the display screen.
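As an illustrative, non-normative sketch, steps S1 through S5 might be orchestrated as follows, where `focus`, `capture` and `show` are hypothetical stand-ins for the camera and display-screen interfaces (these names do not appear in the patent), and the focusing area is assumed to be centred in the shooting range as in the embodiment of figs. 2A and 2B:

```python
def adjust_focus_and_display(s, focus, capture, show, w, h):
    """One pass through steps S1-S5 for magnification s (s >= 1),
    which arrives from step S1 as user input.

    focus, capture and show are hypothetical device callbacks;
    w and h are the width and height of the camera's shooting range."""
    cw, ch = w / s, h / s                    # S2: focusing area is w/S wide,
    x0, y0 = (w - cw) / 2, (h - ch) / 2      #     h/S high, centred in the range
    region = (x0, y0, cw, ch)
    focus(region)                            # S3: focus on the focusing area
    image = capture()                        # S4: acquire the environment image
    show(image, region)                      # S5: enlarge that part and display it
    return region
```

For example, with a 100 x 80 shooting range and magnification 2, the computed region is the centred 50 x 40 rectangle.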
According to an aspect of the application, the camera is a zoom camera, and the step S4 is implemented by the zoom camera.
According to an aspect of the present application, the step S2 includes: and calculating a focusing area corresponding to the magnification factor in the shooting range of the camera through a preset model, wherein the model establishes association between the magnification factor and the focusing area.
According to one aspect of the application, in the model, a linear correlation is established between the magnification and the focusing area.
According to an aspect of the present application, the step S3 includes: focusing the focusing area without changing the focal length of the camera.
According to an aspect of the present application, the step S5 includes: and amplifying the part of the environment image corresponding to the focusing area to the size of the display screen, and displaying the part of the environment image through the display screen.
The present application further relates to a near-eye display device comprising:
a camera configured to capture an image of an environment surrounding a wearer;
an operation unit configured to receive an input for adjusting a magnification;
a display screen configured to display an image; and
and the processing unit is coupled with the camera, the operation unit and the display screen, and is configured to calculate a focusing area corresponding to the magnification factor in the shooting range of the camera according to the magnification factor, focus the focusing area, control the camera to acquire an environment image, amplify a part corresponding to the focusing area in the environment image and display the part through the display screen.
According to one aspect of the application, the processing unit is configured to calculate a focusing area corresponding to the magnification in the shooting range of the camera through a preset model, wherein the model establishes a correlation between the magnification and the focusing area.
According to one aspect of the application, in the model, a linear correlation is established between the magnification and the focusing area.
According to one aspect of the application, the near-eye display device is a head-mounted electronic visual aid.
According to one aspect of the application, the processing unit is configured to: focusing the focusing area without changing the focal length of the camera.
According to the information processing method and system for adjusting focus under magnified display described above, a preset model associating the magnification with the focusing area allows a display picture with optimized focus adjustment to be generated promptly at any feasible magnification, thereby markedly reducing the dependence on vision during focusing and providing clear imaging simply and stably. Electronic devices based on this method and system are particularly suitable for offering convenience and help to the visually impaired.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, with reference to the accompanying drawings in which:
FIG. 1 illustrates a method of adjusting a focus area using a near-eye display device according to one embodiment of the invention;
FIGS. 2A and 2B show schematic diagrams illustrating magnification versus focus area according to one embodiment of the present application; and
FIG. 3 illustrates the structure of an electronic viewing aid according to one embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following examples and accompanying drawings. It is to be understood that the following specific examples are illustrative of the invention and are not to be construed as limiting the invention. Also, for convenience of description, only portions related to the present invention are shown in the drawings. It should be noted here that the various embodiments and features thereof in the present application may be combined with each other without conflict.
As described above, with the rapid development of industries such as chip, artificial intelligence, and display technology, people are increasingly using electronic devices such as mobile phones, digital cameras, AR/VR glasses, etc. for imaging. In the imaging process, focusing is a necessary link, and a clearer image is obtained by fine adjustment of the relative position of the lens. As a further improvement of the focusing effect, the related art mostly focuses on making a certain area in the current view clearer relative to the entire view, for example, by actively selecting the local area. This is advantageous in that the focus position can be adjusted according to personal preferences and can be achieved in various ways, such as a cell phone equipped with a touch-sensitive display module or smart glasses equipped with an eye tracking module, etc. However, the above techniques do not sufficiently consider the use requirements of visually impaired people. For visually impaired people, careful comparison and identification in the field of view is often difficult (at least not frequently applied). Therefore, achieving an improvement in focusing effect as a "passive skill" in the electronic device in which it is used may be a more reasonable and practical way for visually impaired people than the aforementioned "active selection" approach. In addition, the electronic visual aid is an electronic device for improving the visual ability of a person with visual impairment, and can make the most use of limited vision. Electronic viewing aids typically allow the image displayed in its viewing window to be enlarged, thereby improving the legibility of the displayed image. However, the related electronic visual aid cannot properly perform focus adjustment after enlarging a display image, resulting in deterioration of the sharpness of the enlarged image. 
In view of this, the present application proposes an information processing method and system for adjusting focusing under enlarged display, which, by being incorporated into an electronic device such as an electronic visual aid, causes the electronic device to perform focusing adjustment optimized in advance while performing enlarged display, and thus the present invention has been completed.
In the present application, unless otherwise specified, the expression "plurality" usually means two or more, i.e., a number ≧ 2.
A first aspect of the present application relates to a method 10 of adjusting a focus area with a near-eye display device. The method may be implemented by a common near-eye display device, for example, comprising a camera and a display screen, wherein the camera may be used to capture an image of the environment around the wearer of the near-eye display device, and the captured image may be processed (or unprocessed) for display by the display screen for viewing by the wearer. The method 10 is described in detail below with reference to the drawings.
At step S1: an input is received that adjusts the magnification.
The near-eye display device is provided, for example, with a user interface for receiving user input regarding magnification. When the wearer of the near-eye display device cannot clearly see the image of a certain area on the display screen for visual reasons, the magnification can be input through this interface. The user interface may take the form of push buttons or a knob, or may include a voice-recognition module that determines the magnification by recognizing the user's voice command. The magnification may be an integer, such as 1, 2, 3 or more, or a non-integer, such as 1.5. All of these fall within the scope of the present invention.
It will be appreciated by those skilled in the art that the user interface may be implemented in any known and common form, for example, by being provided in a controller connected to the product body in a wired or wireless manner, or directly in the product body, etc., and the present application is not particularly limited thereto.
At step S2: and calculating a focusing area corresponding to the magnification factor in the shooting range of the camera according to the magnification factor.
In one embodiment of the present application, step S2 is implemented by presetting a model that has established an association between a magnification and a focused region in the environment image. With the model, in the case where a numerical value of a certain magnification is input, an array defining the boundary of the in-focus area is output accordingly.
The term "in-focus area" in this application is to be understood as an area of the environment image (the actual scene) within the visible area. The visible area, or shooting range, that the camera can observe is fixed at the current focal length and can be represented by a horizontal angle and a vertical angle. Those skilled in the art will appreciate that, with the size of the shooting range fixed, different magnifications cause different portions of the same environment image to be displayed. According to the present application, the magnification of the display changes as the user adjusts it (i.e., step S1), while the size of the user's viewable area (i.e., the size of the display screen) generally remains constant. Under magnified display, the in-focus region can therefore be determined from the partial picture within the visible region rather than from the entire picture. In other words, by determining the size of the focusing area corresponding to the environment image at each magnification and building this correspondence into the product, the user sees imaging with optimized focus adjustment whenever viewing any part of the magnified picture at any magnification. Thus, in step S2, the magnification produced in step S1 is converted into the corresponding in-focus region.
According to one embodiment of the invention, a focusing area corresponding to the magnification factor in the shooting range of the camera can be calculated through a preset model, wherein the model establishes association between the magnification factor and the focusing area. In one embodiment of the present application, a linear correlation is constructed between the magnification and the in-focus region in the model. The process of establishing a linear correlation between the magnification and the in-focus region in the model according to the present application is described below by way of a specific example.
Fig. 2A and 2B show schematic diagrams of magnification versus in-focus area according to an embodiment of the present application.
Fig. 2A shows the in-focus area F in the ambient image E at 1 × magnification. As shown in fig. 2A, the viewing area V of the display screen has a width w and a height h. The environmental image E captured by the camera is scaled to the same size as the visible area V at 1 x magnification, so its width is also w and its height is also h. At the magnification of 1, since the environment image E is just completely displayed in the visible region V, the in-focus region F corresponds to the entire environment image E, and has a width w and a height h.
Fig. 2B shows the in-focus region F′ in the ambient image E at S-times magnification (S > 1). As shown in fig. 2B, the width of the visible region V and of the ambient image E is still w, and the height is still h. However, at magnification S, only part of the enlarged environment image E can be displayed in the visible region V, so the in-focus region F′ corresponds to the displayed portion of E, with width w′ = w/S and height h′ = h/S.
From this, the boundary coordinates of the focusing area (defined by the vertices A, B, C and D) can be calculated when the environment image E is enlarged S times. Assuming the lower-left corner of the environment image E is the coordinate origin (0, 0):
xA = (w − w′)/2; substituting w′ = w/S gives xA = 0.5w·(1 − 1/S);
yA = (h − h′)/2; substituting h′ = h/S gives yA = 0.5h·(1 − 1/S);
xB = w − (w − w′)/2; substituting w′ = w/S gives xB = 0.5w·(1 + 1/S);
yB = yA = 0.5h·(1 − 1/S);
xC = xB = 0.5w·(1 + 1/S);
yC = h − (h − h′)/2; substituting h′ = h/S gives yC = 0.5h·(1 + 1/S);
xD = xA = 0.5w·(1 − 1/S);
yD = yC = 0.5h·(1 + 1/S).
The coordinates of the points ((xA, yA), (xB, yB), (xC, yC) and (xD, yD)) defining the boundary of the focusing area are thus expressed as functions of the magnification S: with the model according to the application, the numerical value of the magnification is input, and an array defining the focusing area is output.
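Under the coordinate convention above (origin at the lower-left corner of E), the vertex formulas can be sketched directly in code; the function name and return layout are illustrative only, not taken from the patent:

```python
def focus_area_vertices(w, h, s):
    """Return the vertices A, B, C, D of the focusing area for an
    environment image of width w and height h at magnification s,
    per xA = 0.5w(1 - 1/S), yC = 0.5h(1 + 1/S), etc."""
    if s < 1:
        raise ValueError("magnification S must be >= 1")
    xa = 0.5 * w * (1 - 1 / s)   # = xD
    ya = 0.5 * h * (1 - 1 / s)   # = yB
    xb = 0.5 * w * (1 + 1 / s)   # = xC
    yc = 0.5 * h * (1 + 1 / s)   # = yD
    # A (lower left), B (lower right), C (upper right), D (upper left)
    return [(xa, ya), (xb, ya), (xb, yc), (xa, yc)]
```

At S = 1 the focusing area coincides with the whole environment image; at S = 2 it is the centred region of half the width and half the height.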
In the embodiment of figs. 2A and 2B, the focus area F′ is located at the center of the ambient image E. The present invention is not limited thereto; the focus area F′ may be located elsewhere in the environment image E. Accordingly, the method 10 may further include receiving an input regarding the position of the focus area, i.e. the position of the focusing area may be specified by the wearer.
At step S3: and focusing the focusing area.
According to the present application, once the focusing area has been determined, any known image sharpness evaluation algorithm may be employed in step S3 to complete the focusing process. Taking the common gray-gradient algorithm as an example: the Laplacian operator is first computed over the designated image region; its value differs at different focal lengths, and the focal length at which the operator's value reaches its maximum within the adjustable range is the one that renders the designated region most clearly. The above description of the algorithm is only exemplary, not limiting; any suitable image sharpness evaluation algorithm may be used to implement the focusing process, and the present application is not limited in this respect.
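As a minimal sketch of this gray-gradient idea (pure Python with illustrative names; a real implementation would use an optimized image library, and `capture` is a hypothetical callback, not part of the patent), the Laplacian response can be summed over the region and the focal length that maximises it selected:

```python
def laplacian_sharpness(img):
    """Sum of absolute 4-neighbour Laplacian responses over a grayscale
    image given as a list of rows; a larger value means a sharper image."""
    h, w = len(img), len(img[0])
    total = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            total += abs(lap)
    return total

def autofocus(capture, focal_lengths):
    """Pick the focal length whose captured image maximises the metric;
    `capture` returns the image of the focusing area at a focal length."""
    return max(focal_lengths, key=lambda f: laplacian_sharpness(capture(f)))
```

A flat (defocused) patch yields a metric of zero, while a high-contrast (sharp) patch yields a large value, so the maximum picks out the best focal length.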
At step S4: and acquiring an environment image through the camera.
After the focusing area is focused in step S3, an environment image is captured by the camera, and at this time, objects, characters, and the like in the focusing area are clearer than those before focusing.
At step S5: and amplifying the part of the environment image corresponding to the focusing area and displaying the part of the environment image through the display screen.
In step S5, an image corresponding to the focus area F' in the environment image E may be cut out, and the image of the portion may be enlarged and displayed on the display screen. Preferably, the step S5 includes: and amplifying the part of the environment image corresponding to the focusing area to the size of the display screen, and displaying the part of the environment image through the display screen. In this way, the sharply focused partial image is displayed on the display screen in an enlarged manner, and the wearer of the near-eye display device will be able to obtain a better visual experience.
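A minimal sketch of step S5 under the centred-crop assumption follows (nearest-neighbour enlargement for brevity; a product would use proper resampling, and the names here are illustrative, not from the patent):

```python
def enlarge_focus_area(img, s):
    """Crop the centred focusing area (w/S wide, h/S high) out of the
    environment image and enlarge it back to the full display size by
    nearest-neighbour sampling."""
    h, w = len(img), len(img[0])
    cw, ch = int(w / s), int(h / s)          # crop size w' = w/S, h' = h/S
    x0, y0 = (w - cw) // 2, (h - ch) // 2    # top-left corner of the crop
    return [[img[y0 + min(int(y / s), ch - 1)][x0 + min(int(x / s), cw - 1)]
             for x in range(w)] for y in range(h)]
```

For example, enlarging a 4 x 4 image at S = 2 replicates the central 2 x 2 block across the full output.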
In one embodiment of the application, the camera is a variable-focus camera whose focal length is adjustable so as to change its shooting range: the variable-focus camera collects environment images at different focal lengths and transmits the collected image data in real time to a display connected to it, and the display shows the pictures acquired at the different focal lengths. According to an embodiment of the present application, in step S3, the focusing area is focused without changing the focal length of the camera.
According to the present application, the display screen may be a touch-sensitive display or a non-touch-sensitive display, which is not particularly limited in the present application. In a preferred embodiment of the present application, the display screen is configured for near-eye display, and may be applied, for example, in an electronic viewing aid.
Fig. 3 shows a near-eye display device 100 according to a second aspect of the present application, for example an electronic visual aid. The following detailed description refers to the accompanying drawings.
The near-eye display device 100 includes a camera 1, an operation unit 2, a display screen 5, and a processing unit 3. The camera 1 is, for example, a zoom camera and configured to capture an environment image around a wearer, the operation unit 2 is configured to receive an input for adjusting a magnification factor, the display screen 5 is configured to display an image, the processing unit 3 is coupled to the camera 1, the operation unit 2 and the display screen 5 and configured to calculate a focusing area corresponding to the magnification factor in a shooting range of the camera according to the magnification factor, focus the focusing area, control the camera 1 to capture the environment image, and amplify a portion of the environment image corresponding to the focusing area and display the portion of the environment image through the display screen 5.
The operation unit 2 receives the input of the magnification and can be operated by the user to input a signal for changing the focal length of the zoom camera 1. The processing unit 3 is connected to the zoom camera 1 and the operation unit 2; according to the operation signal (magnification) input from the operation unit 2, it changes the focal length of, or determines the focusing region for, the zoom camera 1, and receives and outputs the environment image data from the zoom camera 1. Preferably, the processing unit 3 is further preset with a function model that determines the array of the focusing area from the numerical value of the magnification input through the operation unit 2, and controls the camera 1 to focus within the focusing area according to that array. After focusing is completed, the camera 1 collects an environment image, and the processing unit 3 enlarges the image corresponding to the focusing area within the environment image and displays it on the display screen 5, preferably in full screen.
According to one embodiment of the present invention, the processing unit 3 is configured to calculate a focusing area corresponding to the magnification in the shooting range of the camera 1 through a preset model, wherein the model establishes a correlation between the magnification and the focusing area.
According to one embodiment of the invention, a linear relationship is constructed between the magnification and the in-focus region in the model.
According to one embodiment of the invention, the near-eye display device 100 is a head-mounted electronic visual aid.
According to one embodiment of the invention, the processing unit 3 is configured to: focusing the focusing area without changing the focal length of the camera 1.
In one embodiment of the application, the processing unit 3 is integrated in a chip.
According to the present application, there is no limitation in principle on the kind of electronic device and product form, etc., involved, as long as it includes the information processing method and system for adjusting focus under enlarged display according to the present application. In one embodiment of the present application, the electronic device is an electronic visual aid, preferably a head-mounted electronic visual aid in the form of a helmet or glasses, for example.
The following describes a method for adjusting a focus area using a near-eye display device and an operation process of the near-eye display device according to the present application in more detail through embodiments.
A visually impaired person is prepared to use the electronic viewing aid 100 to view a small pattern in front of it. Referring to fig. 2A, the zoom camera 1 first collects an environment image containing the small pattern, and transmits data to the processing unit 3, the processing unit 3 transmits the environment image to the display screen 5 for display, and the magnification at this time is the default value 1, so that the display screen 5 displays the whole environment image.
At this time, the user finds that he cannot see clearly the small pattern in the environment image at the magnification of 1. For this purpose, the user operates the operation unit 2 to input a magnification of 3 times; the processing unit 3 determines a focusing area corresponding to the magnification factor in the shooting range of the camera 1 according to the magnification factor of 3 times, performs focusing processing on the environment image according to the array of the focusing areas, acquires the environment image through the camera 1, sends a part corresponding to the focusing area in the environment image to the display screen 5, amplifies the environment image and displays the environment image on the display screen 5, and preferably displays the environment image on the display screen 5 in a full screen mode.
However, the user finds that at 3 times magnification, although the small pattern in the environment image is sufficiently clear, the entire outline thereof cannot be viewed because the magnification is too large. At this time, the user operates the operation unit 2 again to input a magnification of 2 times. The above process is repeated, and finally, the environment image picture which contains the small pattern and is focused under the magnification of 2 times is displayed on the display screen 5. To this end, the definition and size of the small pattern displayed by the electronic viewing aid 100 can meet the needs of the user.
As can be seen from the process described in the above embodiment, the electronic viewing aid according to the present application is preset with a model relating the magnification of the environment image to the focusing area at that magnification, so that a display image with optimized focus adjustment can be generated promptly at any magnification. When using this electronic viewing aid, the user therefore only needs to concentrate on adjusting the magnification according to the clarity of the target object, without giving any additional thought to focusing, achieving a "what you see is what you get" viewing effect. The method and device thus greatly reduce the degree of dependence on vision during focusing, can simply and stably provide clear imaging, and are particularly suitable for offering convenience and help to visually impaired people.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be understood by those skilled in the art that the scope of the present invention in the present application is not limited to the specific combination of the above-mentioned features, but also covers other embodiments formed by arbitrary combinations of the above-mentioned features or their equivalents, for example, the above-mentioned features are replaced with (but not limited to) features having similar functions disclosed in the present application, without departing from the inventive concept.

Claims (11)

1. A method of adjusting a focusing area with a near-eye display device, wherein the near-eye display device comprises a camera and a display screen, the method comprising:
S1: receiving an input for adjusting the magnification;
S2: calculating, according to the magnification, a focusing area corresponding to the magnification within the shooting range of the camera;
S3: focusing on the focusing area;
S4: acquiring an environment image through the camera; and
S5: magnifying the portion of the environment image corresponding to the focusing area and displaying it on the display screen.
2. The method of claim 1, wherein the camera is a variable-focus camera, and step S4 is performed by the variable-focus camera.
3. The method of claim 1, wherein step S2 comprises: calculating, through a preset model, a focusing area corresponding to the magnification within the shooting range of the camera, wherein the model establishes an association between the magnification and the focusing area.
4. The method of claim 3, wherein the model constructs a linear correlation between the magnification and the focusing area.
5. The method according to any one of claims 1-4, wherein step S3 comprises: focusing on the focusing area without changing the focal length of the camera.
6. The method according to any one of claims 1-4, wherein step S5 comprises: magnifying the portion of the environment image corresponding to the focusing area to the size of the display screen and displaying it on the display screen.
7. A near-eye display device comprising:
a camera configured to capture an image of an environment surrounding a wearer;
an operation unit configured to receive an input for adjusting a magnification;
a display screen configured to display an image; and
a processing unit coupled to the camera, the operation unit, and the display screen, the processing unit being configured to: calculate, according to the magnification, a focusing area corresponding to the magnification within the shooting range of the camera; focus on the focusing area; control the camera to acquire an environment image; and magnify the portion of the environment image corresponding to the focusing area and display it on the display screen.
8. The near-eye display device of claim 7, wherein the processing unit is configured to calculate, through a preset model, the focusing area corresponding to the magnification within the shooting range of the camera, wherein the model establishes an association between the magnification and the focusing area.
9. The near-eye display device of claim 8, wherein the model constructs a linear correlation between the magnification and the focusing area.
10. The near-eye display device of any one of claims 7-9, wherein the near-eye display device is a head-mounted electronic vision aid.
11. The near-eye display device of any one of claims 7-9, wherein the processing unit is configured to focus on the focusing area without changing the focal length of the camera.
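Steps S2 and S5 of the claimed method can be sketched together as a simple crop-and-magnify routine. The centered 1/magnification crop and the nearest-neighbor upscale are illustrative assumptions for this sketch; the camera focusing and capture steps (S3, S4) are outside its scope.

```python
def crop_and_magnify(frame, magnification, screen_w, screen_h):
    """S2 + S5: compute the focusing area for the given magnification,
    crop it from the frame, and scale it (nearest-neighbor) to screen size.

    `frame` is a 2-D list of pixel values, indexed as frame[row][col].
    """
    frame_h, frame_w = len(frame), len(frame[0])
    # S2: focusing area assumed to be a centered crop of size frame/magnification.
    w = max(1, round(frame_w / magnification))
    h = max(1, round(frame_h / magnification))
    x0 = (frame_w - w) // 2
    y0 = (frame_h - h) // 2
    # S5: magnify the cropped region to fill the screen (nearest neighbor).
    out = []
    for sy in range(screen_h):
        src_y = y0 + sy * h // screen_h
        row = []
        for sx in range(screen_w):
            src_x = x0 + sx * w // screen_w
            row.append(frame[src_y][src_x])
        out.append(row)
    return out

# 4x4 test frame with distinct values; at 2x magnification the centered
# 2x2 block (rows/cols 1..2) fills the 4x4 "screen".
frame = [[10 * r + c for c in range(4)] for r in range(4)]
screen = crop_and_magnify(frame, 2.0, 4, 4)
print(screen[0])  # [11, 11, 12, 12]
```

In a real device the upscale would be done by the ISP or GPU with proper interpolation; the point of the sketch is only that the displayed image is exactly the focusing area stretched to the screen, which is why focusing that area suffices at every magnification.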
CN202011578670.3A 2020-12-28 2020-12-28 Method for adjusting focusing area by using near-eye display device and near-eye display device Pending CN114697513A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011578670.3A CN114697513A (en) 2020-12-28 2020-12-28 Method for adjusting focusing area by using near-eye display device and near-eye display device

Publications (1)

Publication Number Publication Date
CN114697513A 2022-07-01

Family

ID=82130598




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination