CN108259838B - Electronic vision aid and image browsing method for same - Google Patents


Info

Publication number
CN108259838B
CN108259838B (application CN201810222641.XA)
Authority
CN
China
Prior art keywords
image
window
data
user
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810222641.XA
Other languages
Chinese (zh)
Other versions
CN108259838A (en)
Inventor
郑雅羽
龚泽挚
陈陇敏
应翔
Current Assignee
Hangzhou Dukang Technology Co ltd
Original Assignee
Hangzhou Dukang Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Dukang Technology Co ltd
Priority to CN201810222641.XA
Publication of CN108259838A
Application granted
Publication of CN108259838B


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

Disclosed are an electronic vision aid and an image browsing method for the same. The image browsing method comprises: capturing an image of a photographed object through a lens, and focusing and projecting the captured image onto a sensing area of an image sensor; sensing, by the image sensor, the image of the photographed object captured by the lens, and converting the optical image into acquisition output data; defining a first window in the acquisition output data based on the magnification, and reading out the pixel data in the first window to generate to-be-displayed processing data; performing image magnification processing on the to-be-displayed processing data to obtain display data; displaying the display data through an output display module; and judging whether a field-of-view movement request has been received from a user, and if so, adjusting the position of the first window according to the received request.

Description

Electronic vision aid and image browsing method for same
Technical Field
The invention relates to the field of real-time video acquisition, and in particular to an electronic vision aid and an image browsing method for the electronic vision aid.
Background
The electronic vision aid is an electronic video device and a high-tech product of the vision-aid industry. With a maximum magnification of up to 50×, it is well suited to patients with moderate to severe low vision. An electronic vision aid offers magnification, focal-length, brightness, and contrast adjustment, and the main types are: pocket (hand-held) vision aids, CCTV (closed-circuit television) vision aids fixed on a table top, and portable vision aids with rotatable lenses whose output can be sent to a liquid-crystal display. Electronic vision aids effectively improve the usable vision of low-vision patients, who use them to focus on and enlarge objects of interest so that details can be seen more clearly.
The typical workflow of an electronic vision aid is: first, a camera collects images; next, the collected images are transferred to a memory chip; finally, the processed images are output to a display screen after image processing such as magnification, color change, and contrast enhancement. By zooming, changing colors, and so on, a low-vision user can bring out more detail on the screen and read the part of the image of interest in detail.
In conventional electronic vision aid products, when transferring the image acquired by the image sensor to memory, the sensor's windowing method commonly uses either a single ROI mode (window mode) or a single BIN mode (pixel-binning mode). In imaging applications, ROI mode defines one or more windows of interest within the sensor's resolution, reads out only the image information inside those windows, transfers it to memory, and then performs subsequent processing. Setting a smaller ROI reduces the amount of image information the sensor must transmit and the processor must handle, increasing the camera's acquisition data rate. This technique is very useful: industrial cameras typically provide a 4:3 resolution window, yet in practical detection often only a portion of the vertical resolution is used. If the image sensor supports ROI, the uninteresting parts of the data can be discarded, greatly improving transmission and processing efficiency and allowing flexible application and post-processing of the images. However, ROI mode has an inherent limitation: because it takes a small portion of the original image directly, the field of view shrinks correspondingly, and at high magnification the field of view becomes very small, which can prevent the user from seeing the content clearly.
Adjacent pixels in an image are highly correlated. BIN mode exploits this correlation by downsampling adjacent pixels and combining them into a single output pixel, effectively reducing the amount of transmitted data while improving sensitivity and transmission rate. Although BIN mode reduces resolution, it still keeps the image acceptably sharp over a range of magnifications; the image only begins to blur once a larger magnification is reached.
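As a rough illustration of the binning idea described above (this is a sketch, not the patented sensor logic; the 2×2 block size, averaging as the combining rule, and the NumPy implementation are all assumptions):

```python
import numpy as np

def bin_2x2(image: np.ndarray) -> np.ndarray:
    # Combine each 2x2 block of adjacent pixels into one output pixel by
    # averaging, halving the resolution in both dimensions and cutting
    # the amount of data to transfer by a factor of four.
    h, w = image.shape
    h, w = h - h % 2, w - w % 2          # drop any odd edge row/column
    blocks = image[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

frame = np.arange(16, dtype=float).reshape(4, 4)
binned = bin_2x2(frame)                  # 4x4 frame -> 2x2 output
```

A real sensor performs this combination in analog or digital readout hardware; the sketch only shows why the transmitted data volume shrinks while each output pixel still summarizes its neighborhood.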
At different magnifications, image definition and the size of the observable field of view are key metrics for a vision-aid product. Because of their inherent visual deficit, low-vision users must rely on image magnification to obtain the information of interest. In single ROI mode, the field of view shrinks after magnification, so the user must constantly move the vision aid to gather enough useful information. Single BIN mode can be magnified by a certain factor while still meeting the field-of-view requirement, but it then becomes blurry and fails the definition requirement. Neither single mode therefore satisfies the needs of low-vision users, and a new method is needed that provides both a larger field of view and good definition at high magnification.
Disclosure of Invention
The invention aims to provide an electronic vision aid and an image browsing method for it, so that low-vision users can still obtain a large field of view and good image definition even at high magnification.
An embodiment of the present invention provides an electronic vision aid, comprising: an image acquisition module comprising a lens and an image sensor, wherein the lens captures an image of a photographed object and focuses and projects the captured image onto a sensing area of the image sensor, and the image sensor senses the image captured by the lens and converts the optical image into acquisition output data; a browsing control module for receiving image zoom requests and field-of-view movement requests from a user; an image processing module, coupled to the image acquisition module and the browsing control module, which receives the acquisition output data, defines a first window in the acquisition output data based on the magnification, reads out the pixel data in the first window to generate to-be-displayed processing data, and then performs image magnification processing on the to-be-displayed processing data to obtain display data; and an output display module that receives the display data from the image processing module and displays it in real time. The image processing module can change the magnification and adjust the size of the first window according to an image zoom request from the user, and can also adjust the position of the first window according to a field-of-view movement request from the user.
An embodiment of the invention also provides an image browsing method for the electronic vision aid, comprising the following steps: capturing an image of a photographed object through a lens, and focusing and projecting the captured image onto a sensing area of an image sensor; sensing, by the image sensor, the image captured by the lens, and converting the optical image into acquisition output data; defining a first window in the acquisition output data based on the magnification, and reading out the pixel data in the first window to generate to-be-displayed processing data; performing image magnification processing on the to-be-displayed processing data to obtain display data; displaying the display data in real time through an output display module; judging whether a field-of-view movement request has been received from a user; and if so, adjusting the position of the first window according to the field-of-view movement request.
An embodiment of the invention further provides an electronic vision aid, comprising: an image acquisition module comprising a lens and an image sensor, wherein the lens captures an image of a photographed object and focuses and projects the captured image onto a sensing area of the image sensor, and the image sensor senses the image captured by the lens and converts the optical image into acquisition output data; a browsing control module that receives field-of-view movement requests from a user; an image processing module, coupled to the image acquisition module, which receives the acquisition output data, defines a first window in the acquisition output data based on the magnification, reads out the pixel data in the first window to generate to-be-displayed processing data, and then performs image magnification processing on the to-be-displayed processing data to obtain display data; and an output display module that receives the display data from the image processing module and displays it in real time. The image sensor works in a windowed read-out mode, defining a second window in the sensed image and reading out the pixel data in the second window to generate the acquisition output data. The image processing module may adjust the position of the first window in response to a field-of-view movement request from the user; if the first window has already reached a boundary of the acquisition output data when the request is received, the image sensor adjusts the position of the second window instead.
Drawings
The invention will be further described in conjunction with the accompanying drawings, all of which are for illustration only and are not limiting. Furthermore, they may show only part of the system.
FIG. 1 is a block diagram of an electronic vision aid according to an embodiment of the present invention;
FIG. 2 is a schematic diagram comparing the effects of the image sensor in different modes;
FIG. 3 is a schematic diagram illustrating the internal flow of display pixel information in the electronic vision aid of FIG. 1 according to an embodiment of the present invention;
FIG. 4 is a flowchart of the steps of an image browsing method for an electronic vision aid according to an embodiment of the present invention;
FIGS. 5A-5D are schematic diagrams showing how a window position changes according to a user's field-of-view movement request in an image browsing method according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an image processing module according to an embodiment of the present invention.
Detailed Description
Specific embodiments of the invention will be described in detail below, it being noted that the embodiments described herein are for illustration only and are not intended to limit the invention. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice the present invention. In other instances, well-known modules, circuits, materials, or methods have not been described in detail in order not to obscure the present invention.
Throughout the specification, references to "one embodiment," "an embodiment," "one example," or "an example" mean: a particular feature, structure, or characteristic described in connection with the embodiment or example is included within at least one embodiment of the invention. Thus, the appearances of the phrases "in one embodiment," "in an embodiment," "one example," or "an example" in various places throughout this specification are not necessarily all referring to the same embodiment or example. Furthermore, the particular features, structures, or characteristics may be combined in any suitable combination and/or sub-combination in one or more embodiments or examples. It will be understood by those of ordinary skill in the art that the term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Fig. 1 is a schematic structural diagram of an electronic vision aid according to an embodiment of the present invention, comprising an image acquisition module 1, an image processing module 2, an output display module 3, and a browsing control module 4. The electronic vision aid acquires images through the image acquisition module 1. The browsing control module 4 receives inputs from the user, such as image zoom requests and field-of-view movement requests, and directs the image processing module 2 to perform color change, magnification, field-of-view movement, and other operations on the image accordingly. The output display module 3 then displays the processed image.
The image acquisition module 1 includes a lens 101 and an image sensor 102. The lens 101 is an optical component that forms an image and is typically composed of several lens elements, such as plastic or glass lenses. The lens 101 captures an image of the subject and focuses and projects the captured image onto the sensing area of the image sensor 102. The quality of the lens directly influences the color fidelity and tonal depth of the acquired image.
The image sensor 102 is an element that converts an optical image into an electronic signal, and is widely used in digital cameras and other electronic optical devices. It senses the image of the photographed object captured by the lens 101 and converts the optical image into acquisition output data. The image sensor 102 generally includes a plurality of pixel units distributed in an array over its effective sensing area; the more pixel units it includes, the higher the resolution of the image. Currently there are two main types of image sensor: CCD (charge-coupled device) and CMOS (complementary metal-oxide-semiconductor). In one embodiment, the image sensor 102 is a CMOS image sensor. A CMOS image sensor generally consists of an image-sensitive cell array, a row driver, a column driver, timing control logic, an AD converter, a data bus output interface, a control interface, and so on, and its working process can generally be divided into reset, photoelectric conversion, integration, and readout. For brevity, its specific working principle is not described in detail here.
The main function of the browsing control module 4 is to control image zooming, field-of-view movement, and other processes while the user browses the image. The visually impaired vary in their perception of color, and in some embodiments the browsing control module 4 also provides color-change control, so that a visually impaired user can direct the image processing module 2 to adjust the contrast of the image and select the colors most comfortable to view. The browsing control module 4 may use key inputs (e.g., zoom keys, field-of-view shift keys, color-change keys) or other user input techniques, such as a touch screen.
The image processing module 2 is coupled to the image acquisition module 1 and the browsing control module 4; it receives the acquisition output data and performs image processing on it according to user requests (e.g., zoom, field-of-view movement, and color-change requests) from the browsing control module 4 to obtain the display data. The output display module 3, typically an LCD or similar display screen, receives the display data from the image processing module 2 and displays it in real time. In some embodiments, the output display module 3 may also include an HDMI module connected to an external display device.
The basic principle of image scaling is to define a window of suitable size in the acquisition output data according to the magnification Mag. The smaller the magnification, the larger the window and the wider the actual physical area shown on the output display module 3; the larger the magnification, the smaller the window and the narrower that area. The pixel data in this window is then read out, scaled by interpolation (e.g., nearest-neighbor, bilinear, or bicubic interpolation), and the resulting display data is shown on the output display module 3 in real time. For example, if the display resolution of the output display module 3 is 1280×720, the window C defined in the acquisition output data has a resolution of (1280/Mag) × (720/Mag): a rectangular area 1280/Mag pixels long and 720/Mag pixels wide whose top-left vertex is the focal position (i, j) within the acquisition output data.
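The window-size arithmetic just described can be sketched as follows (a minimal illustration; the function names and the use of integer division are assumptions, and a real implementation would round and clamp to valid sensor coordinates):

```python
def window_c_size(display_w: int, display_h: int, mag: int):
    # Window C shrinks as the magnification grows: each side is the
    # display resolution divided by the magnification factor Mag.
    return display_w // mag, display_h // mag

def window_c_rect(i: int, j: int, display_w: int, display_h: int, mag: int):
    # The focal position (i, j) is the top-left vertex of window C
    # within the acquisition output data; returns (x0, y0, x1, y1).
    w, h = window_c_size(display_w, display_h, mag)
    return (i, j, i + w, j + h)

# At 2x magnification a 1280x720 display reads a 640x360 window.
size_at_2x = window_c_size(1280, 720, 2)
```

The smaller window is then interpolated back up to the full display resolution, which is what produces the magnification effect.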
The image processing module 2 can adjust the magnification Mag and the size of the aforementioned window according to a zoom request from the user. It can also adjust the position of the window according to a field-of-view movement request, so that the field of view of the electronic vision aid moves while the image remains magnified and the user obtains more useful information without physically moving the vision aid.
In the embodiment shown in fig. 1, in addition to receiving the acquisition output data from the image sensor 102, the image processing module 2 also controls the operating mode and operating parameters of the image sensor 102 according to user requests from the browsing control module 4. When the magnification Mag is less than the preset value T, the image sensor 102 operates in pixel-binning mode under the control of the image processing module 2, combining adjacent pixels of the image captured by the lens 101 and projected onto the sensing area into single pixels to generate the acquisition output data. When Mag is greater than T, the image sensor 102 operates in ROI mode under the control of the image processing module 2: a window of interest is defined in the image captured by the lens 101 and projected onto the sensing area, and the pixel data inside that window is read out to generate the acquisition output data.
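The threshold rule above amounts to a one-line mode switch. A sketch (the mode names and the strict comparison are assumptions of the sketch; the source does not state which mode applies when Mag equals T exactly):

```python
BIN_MODE, ROI_MODE = "BIN", "ROI"

def select_windowing_mode(mag: float, threshold: float) -> str:
    # Below the preset value T the sensor bins pixels (wide field of
    # view, reduced resolution); above T it reads out a region of
    # interest (narrow field of view, native sensor resolution).
    return ROI_MODE if mag > threshold else BIN_MODE
```

In the flowchart of fig. 4 this check corresponds to steps 650–690, which additionally skip the switch when the sensor is already in the target mode.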
Fig. 2 compares the effects of the image sensor in the different modes. Visually, in BIN mode the pixels of the acquisition output data represent the entire image content captured by the lens, whereas in ROI mode they represent only a portion of it.
Fig. 3 illustrates the internal flow of display pixel information in the electronic vision aid of fig. 1 according to an embodiment of the present invention. In BIN mode, the image sensor 102 defines a window A (e.g., with a resolution of 3200×1800) in the image captured by the lens 101 (e.g., with a resolution of 4208×3120), and combines adjacent pixels within window A into single pixels to generate the acquisition output data (e.g., with a resolution of 1600×900). Because the electronic vision aid is mainly used at close range, the edges of the original image captured by the lens are prone to darkening, barrel distortion, and similar artifacts. By taking a suitable central region (window A) as the whole field of view and deliberately discarding the edges, these artifacts are effectively prevented from degrading the user experience.
In ROI mode, the image sensor 102 defines a window B (with the same resolution as the acquisition output data, e.g., 1600×900) in the original image and reads out the pixel data in that window to generate the acquisition output data.
The image processing module 2 then defines a window C in the acquisition output data according to the magnification Mag, reads out the pixel data in window C as the to-be-displayed processing data, and performs magnification and/or color-change processing on it to generate the display data for final display.
Through the browsing control module 4, the user can control the magnification Mag, the focal position (a, b) of window B, the focal position (i, j) of window C, the color-change parameters, and other data, thereby achieving image zooming, field-of-view movement, and image color change.
Fig. 4 is a flowchart illustrating the steps of an image browsing method for an electronic vision aid according to an embodiment of the present invention.
Step 610: the output display module displays an image. The image is determined by display data, which may be generated as follows: an image of the photographed object is captured through the lens and focused and projected onto the sensing area of the image sensor; the image sensor senses the image captured by the lens and converts the optical image into acquisition output data; a first window is defined in the acquisition output data based on the magnification, and the pixel data within it is read out to generate to-be-displayed processing data; image processing (e.g., magnification, color change) is performed on the to-be-displayed processing data to obtain the display data.
Step 620: a key request from a user is monitored.
Step 630: it is determined whether a zoom request exists; if so, step 640 is executed, and if not, step 700.
Step 640: the magnification Mag is calculated from the zoom request.
Step 650: it is determined whether the magnification Mag is greater than the preset value T; if so, step 660 is executed, otherwise step 680.
Step 660: it is determined whether the windowing mode of the image sensor is ROI mode; if so, the flow returns to step 610. If not, the image sensor is operating in BIN mode, and step 670 is executed.
Step 670: the windowing mode of the image sensor is switched from BIN mode to ROI mode.
Step 680: reaching step 680 means Mag ≤ T; it is determined whether the windowing mode of the image sensor is BIN mode. If so, the flow returns to step 610; if not, the sensor is currently in ROI mode, and step 690 is executed to switch it to BIN mode.
Step 690: the windowing mode of the image sensor is switched from ROI mode to BIN mode.
Step 700: it is determined whether a field-of-view movement request exists. If so, step 710 is executed; if not, step 760.
Step 710: it is determined whether window C has reached a boundary of the acquisition output data; if so, step 730 is executed, and if not, step 720. As those skilled in the art will appreciate, the acquisition output data has an upper, lower, left, and right boundary, and the check is made against the boundary in the direction (up/down/left/right) of the requested field-of-view movement. For example, if the movement direction is up, it is determined whether window C has reached the upper boundary of the acquisition output data.
Step 720: reaching step 720 means window C has not reached the boundary of the acquisition output data. The focal position (i, j) is recalculated from the field-of-view movement request, window C is re-selected in window B at the new focal position, and the flow returns to step 610.
Step 730: it is determined whether the current windowing mode of the image sensor is ROI mode; if so, step 740 is executed. If not, the sensor is in BIN mode, the flow returns directly to step 610, and the electronic vision aid emits a beep to indicate that the field of view has reached the boundary.
Step 740: it is further determined whether window B has reached the boundary of window A. If so, the flow returns to step 610 and the electronic vision aid emits a beep to indicate that the field of view has reached the boundary. If not, step 750 is executed. As in step 710, the check is made against the boundary of window A in the direction of the requested movement.
Step 750: reaching this step means window B has not exceeded the range of window A. The focal position (a, b) is recalculated from the field-of-view movement request, and window B is re-selected in window A at the updated focal position. Window C moves along with window B, so their relative position remains unchanged.
Step 760: it is determined whether a color-change request exists. If so, step 770 is executed; if not, the flow returns to step 610.
Step 770: the color-change parameters are calculated from the color-change request.
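The field-of-view movement logic of steps 700–750 can be sketched as one function. This is an illustrative sketch only: representing windows as (x, y, w, h) tuples in a common coordinate frame, the fixed step size, and the function name are all assumptions, not the patented implementation:

```python
def move_view(direction, win_c, win_b, win_a, mode, step=8):
    # Try to slide window C first (step 720); if it already touches the
    # relevant boundary, slide window B inside window A instead, with
    # window C following so their relative position is unchanged
    # (step 750, ROI mode only). Returns (win_c, win_b, at_boundary);
    # at_boundary=True corresponds to the "beep" of steps 730/740.
    dx, dy = {"left": (-step, 0), "right": (step, 0),
              "up": (0, -step), "down": (0, step)}[direction]

    def fits(inner, outer):
        x, y, w, h = inner
        ox, oy, ow, oh = outer
        return ox <= x and x + w <= ox + ow and oy <= y and y + h <= oy + oh

    def shifted(rect):
        x, y, w, h = rect
        return (x + dx, y + dy, w, h)

    if fits(shifted(win_c), win_b):                    # step 720
        return shifted(win_c), win_b, False
    if mode == "ROI" and fits(shifted(win_b), win_a):  # step 750
        return shifted(win_c), shifted(win_b), False
    return win_c, win_b, True                          # boundary reached
```

Note how BIN mode stops at the boundary of window B, while ROI mode gains a second level of movement by repositioning window B inside window A, which is what equalizes the reachable field of view between the two modes.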
The embodiment of the invention solves the blurring problem of the single BIN mode at larger magnifications. In the original single BIN mode, pixel binning lowers the resolution before interpolation-based scaling, so the image becomes blurry even at moderate magnification, and further magnification makes it completely unreadable. In the embodiment of the invention, ROI mode is adopted once the magnification exceeds the threshold T, skipping the pixel-binning step and thereby improving definition.
The embodiment of the invention also solves the problem that the field of view becomes extremely small at larger magnifications, which would force the user to keep moving the vision aid to gather enough useful information. By allowing window C to be re-selected within the acquisition output data in response to a field-of-view movement request, the user can browse the internally cropped portion of the acquisition output data. In ROI mode, window B is further allowed to move within window A in response to such requests, ensuring that the same overall field of view is reachable in ROI mode as in BIN mode.
Figs. 5A to 5D illustrate how the window positions change in response to a user's field-of-view movement requests in an image browsing method according to an embodiment of the present invention. Fig. 5A shows the original window positions. As described above, the position of window C is adjusted when a field-of-view movement request is received from the user. If the user repeatedly requests movement to the right, window C keeps moving right until it reaches the right boundary of window B, as shown in fig. 5B. If the image sensor is operating in BIN mode at this point, the field of view can move no further, and the electronic vision aid emits a beep to indicate that the boundary has been reached. If instead the sensor is operating in ROI mode, window B moves right as shown in fig. 5C, with window C following, until window B reaches the right boundary of window A as shown in fig. 5D; the vision aid then emits a beep to indicate that the field of view has reached the boundary.
Fig. 6 is a block diagram of an image processing module according to an embodiment of the present invention, comprising a clipping unit 201, an amplifying unit 202, a color-changing unit 203, a focus control unit 204, a zoom control unit 205, and a color-change control unit 206. The zoom control unit 205 receives a zoom request from the user and calculates the magnification Mag. The focus control unit 204 receives a field-of-view movement request from the user and calculates the updated focal position (a, b) of window B or focal position (i, j) of window C accordingly. The color-change control unit 206 receives a color-change request from the user and calculates the color-change parameters. The clipping unit 201 determines the size and position of window C from the magnification Mag and the focal position (i, j), and reads out the pixel data of window C from the acquisition output data to generate the to-be-displayed processing data. The amplifying unit 202 receives the to-be-displayed processing data and performs image magnification on it based on Mag, generating amplified data. The color-changing unit 203 receives the amplified data and adjusts its contrast based on the color-change parameters to generate the display data. Those skilled in the art will appreciate that the order of the color-changing unit 203 and the amplifying unit 202 may be reversed, and in some embodiments the color-changing unit 203 may even be omitted.
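The unit ordering just described (clip, then magnify, then change color) can be sketched with placeholder stage bodies. Nearest-neighbour magnification, a simple gain standing in for the color change, and taking the display resolution equal to the acquisition resolution are all assumptions of the sketch, not the patented units:

```python
def process_frame(acquired, mag, focus_ij, color_gain):
    # acquired: 2D list of pixel values (the acquisition output data).
    i, j = focus_ij
    w = len(acquired[0]) // mag          # clipping unit 201: window C,
    h = len(acquired) // mag             # top-left vertex at (i, j)
    window_c = [row[i:i + w] for row in acquired[j:j + h]]
    magnified = [[px for px in row for _ in range(mag)]   # amplifying unit
                 for row in window_c for _ in range(mag)] # 202 (nearest)
    return [[px * color_gain for px in row]               # color changing
            for row in magnified]                         # unit 203
```

Swapping the last two stages, as the text notes is possible, would not change the output of this linear-gain sketch, though it can matter for real contrast adjustments applied before or after interpolation.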
Although the foregoing embodiments take an electronic vision aid as an example, this is not intended to limit the invention; the invention is applicable to any real-time video acquisition system that needs to capture and display real-time video. In addition, it should be understood that the system and method disclosed in the embodiments of the present invention may be implemented in other manners. The above-described embodiments are merely exemplary; the division into units is merely a logical functional division, and other divisions are possible in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the mutual coupling or communication connection shown or discussed may be an indirect coupling or communication connection via some interface, device, or unit, and may be electrical, mechanical, or in other form.
Furthermore, it is to be understood that the terminology used is intended to be descriptive and illustrative rather than limiting. As the present invention may be embodied in several forms without departing from its spirit or essential characteristics, it should also be understood that the above-described embodiments are not limited by any of the details of the foregoing description; rather, the invention should be construed broadly within the spirit and scope defined by the appended claims, and all changes and modifications that fall within the metes and bounds of the claims, or equivalents of such metes and bounds, are therefore intended to be embraced by the appended claims.

Claims (8)

1. An electronic vision aid, comprising:
the image acquisition module comprises a lens and an image sensor, wherein the lens captures an image of a photographed object and focuses and projects the captured image onto a sensing area of the image sensor, and the image sensor senses the image of the photographed object captured by the lens and converts the optical image into acquisition output data;
the browsing control module is used for receiving an image scaling request and a field of view movement request from a user;
the image processing module is coupled to the image acquisition module and the browsing control module, receives the acquisition output data, defines a first window in the acquisition output data based on a magnification factor, reads out the pixel data in the first window to generate processing data to be displayed, and then performs image amplification processing on the processing data to be displayed to obtain display data; and
the output display module receives the display data from the image processing module and displays the display data in real time;
if the browsing control module receives an image scaling request from the user, the image processing module changes the magnification factor and adjusts the size of the first window according to the image scaling request; if the browsing control module receives a field of view movement request from the user, the image processing module adjusts the position of the first window according to the field of view movement request;
in addition, the image processing module also controls the working mode of the image sensor according to the magnification;
when the magnification is larger than a preset value, the image sensor works in a windowed readout mode, defines a second window smaller than the sensed image within the sensed image, and reads out the pixel data in the second window to generate the acquisition output data;
when the magnification is smaller than the preset value, the image sensor works in a pixel merging mode and combines adjacent pixels in the sensed image into one pixel to generate the acquisition output data;
wherein in the windowed readout mode, if the first window has already reached the boundary of the acquisition output data when a field of view movement request is received from the user, the image sensor adjusts the position of the second window according to the user's field of view movement request.
2. The electronic vision aid of claim 1, wherein the browsing control module further provides color change control, such that a visually impaired user can operate the image processing module via the browsing control module to adjust the contrast of the image.
3. The electronic visual aid of claim 1, wherein the output display module is a display screen or an HDMI module connected to an external display device.
4. The electronic vision aid of claim 1, wherein in the pixel merging mode, the image sensor defines a third window within the sensed image and combines adjacent pixels within the third window into one pixel to generate the acquisition output data.
5. The electronic visual aid of claim 1, wherein the image processing module comprises:
a zoom control unit that receives a zoom control request from a user and calculates a magnification factor according to the zoom control request from the user;
a focus control unit that receives a field of view movement request from a user and calculates a focus position of the first window or the second window according to the field of view movement request from the user;
a color change control unit that receives an image color change request from a user and calculates color change parameters according to the image color change request from the user;
a clipping unit coupled to the zoom control unit and the focus control unit, which determines the size and position of the first window based on the magnification factor and the focus position of the first window, and reads out the pixel data in the first window from the acquisition output data to generate processing data to be displayed;
an amplifying unit coupled to the clipping unit and the zoom control unit, which receives the processing data to be displayed and performs image amplification processing on it according to the magnification factor to generate amplified data; and
a color changing unit coupled to the amplifying unit and the color change control unit, which receives the amplified data and adjusts the contrast of the amplified data according to the color change parameters to generate display data.
6. An image browsing method for an electronic vision aid, comprising:
capturing an image of a photographed object through a lens, and focusing and projecting the captured image to a sensing area of an image sensor;
the method comprises the steps that an image sensor senses an image of a shot object captured by a lens, and an optical image is converted into acquisition output data;
defining a first window in the acquisition output data based on a magnification factor, and reading out the pixel data in the first window to generate processing data to be displayed;
performing image amplification processing on the processing data to be displayed to obtain display data;
displaying the display data in real time through an output display module;
judging whether a field of view movement request from a user is received; and
if a field of view movement request from a user is received, adjusting the position of the first window according to the field of view movement request;
the image browsing method further comprises the following steps:
judging whether an image scaling request from a user is received;
if an image scaling request from a user is received, changing the magnification factor and adjusting the size of the first window according to the image scaling request;
judging whether the magnification is smaller than a preset value;
if the magnification is smaller than the preset value, causing the image sensor to work in a pixel merging mode and combine adjacent pixels in the sensed image into one pixel to generate the acquisition output data; and
if the magnification is larger than the preset value, causing the image sensor to work in a windowed readout mode, define a second window smaller than the sensed image within the sensed image, and read out the pixel data in the second window to generate the acquisition output data;
the image browsing method further comprises the following steps:
upon receiving a field of view movement request from a user, judging whether the image sensor is working in the windowed readout mode;
if the image sensor is working in the windowed readout mode, judging whether the first window has reached the boundary of the acquisition output data; and
if the first window has reached the boundary of the acquisition output data, adjusting the position of the second window according to the field of view movement request.
7. The image browsing method of claim 6, further comprising:
judging whether a color change request from a user is received; and
if a color change request is received from a user, calculating a color change parameter according to the color change request.
8. An electronic vision aid, comprising:
the image acquisition module comprises a lens and an image sensor, wherein the lens is used for capturing an image of a shot object, focusing and projecting the captured image to a sensing area of the image sensor, and the image sensor senses the image of the shot object captured by the lens and converts the optical image into acquisition output data;
the browsing control module receives a field of view movement request from a user;
the image processing module is coupled to the image acquisition module, receives the acquisition output data, defines a first window in the acquisition output data based on a magnification factor, reads out the pixel data in the first window to generate processing data to be displayed, and then performs image amplification processing on the processing data to be displayed to obtain display data; and
The output display module receives the display data from the image processing module and displays the display data in real time;
the image sensor works in a windowed readout mode, defines a second window smaller than the sensed image within the sensed image, and reads out the pixel data in the second window to generate the acquisition output data;
if the browsing control module receives a field of view movement request from the user, the image processing module adjusts the position of the first window according to the field of view movement request, wherein if the first window has reached the boundary of the acquisition output data when the field of view movement request is received, the image sensor adjusts the position of the second window.
CN201810222641.XA 2018-03-19 2018-03-19 Electronic vision aid and image browsing method for same Active CN108259838B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810222641.XA CN108259838B (en) 2018-03-19 2018-03-19 Electronic vision aid and image browsing method for same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810222641.XA CN108259838B (en) 2018-03-19 2018-03-19 Electronic vision aid and image browsing method for same

Publications (2)

Publication Number Publication Date
CN108259838A CN108259838A (en) 2018-07-06
CN108259838B true CN108259838B (en) 2024-01-19

Family

ID=62747117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810222641.XA Active CN108259838B (en) 2018-03-19 2018-03-19 Electronic vision aid and image browsing method for same

Country Status (1)

Country Link
CN (1) CN108259838B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109324417A (en) * 2018-12-13 2019-02-12 宜视智能科技(苏州)有限公司 Typoscope and its control method, computer storage medium
CN109670445B (en) * 2018-12-19 2023-04-07 宜视智能科技(苏州)有限公司 Low-vision-aiding intelligent glasses system
CN112693398A (en) * 2020-12-31 2021-04-23 浙江合众新能源汽车有限公司 A-column image display system and method based on electronic exterior rearview mirror
CN113393404A (en) * 2021-07-19 2021-09-14 艾视雅健康科技(苏州)有限公司 Low-vision head-wearing electronic auxiliary vision equipment and image modification method thereof
CN114579074A (en) * 2022-03-17 2022-06-03 北京翠鸟视觉科技有限公司 Interactive screen projection method for typoscope, computer storage medium and typoscope

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040075517A (en) * 2003-02-21 2004-08-30 엘지전자 주식회사 apparatus and method for zoom of display device
CN102106145A (en) * 2008-07-30 2011-06-22 三星电子株式会社 Apparatus and method for displaying an enlarged target region of a reproduced image
CN105450783A (en) * 2016-01-18 2016-03-30 杭州瑞杰珑科技有限公司 A multifunctional desktop typoscope
CN106126100A (en) * 2016-06-24 2016-11-16 青岛海信移动通信技术股份有限公司 A kind of terminal screen display packing and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NZ518092A (en) * 2002-03-28 2004-11-26 Pulse Data Internat Ltd Low vision video magnifier viewing apparatus having digital zoom feature


Also Published As

Publication number Publication date
CN108259838A (en) 2018-07-06

Similar Documents

Publication Publication Date Title
CN108259838B (en) Electronic vision aid and image browsing method for same
US11228748B2 (en) Application processor for disparity compensation between images of two cameras in digital photographing apparatus
US11758265B2 (en) Image processing method and mobile terminal
US7839446B2 (en) Image capturing apparatus and image display apparatus including imparting distortion to a captured image
JP4546565B2 (en) Digital image processing
US8149280B2 (en) Face detection image processing device, camera device, image processing method, and program
TWI531852B (en) Device of capturing images and method of digital focusing
CN109756668B (en) Combining optical zoom and digital zoom under different image capture conditions
CN114143464A (en) Image pickup apparatus and setting screen thereof
EP2592822A2 (en) Zoom control method and apparatus, and digital photographing apparatus
TWI539226B (en) Object-tracing image processing method and system thereof
US20110069156A1 (en) Three-dimensional image pickup apparatus and method
TWI629550B (en) Image capturing apparatus and image zooming method thereof
CN112532808A (en) Image processing method and device and electronic equipment
US20100020202A1 (en) Camera apparatus, and image processing apparatus and image processing method
CN111818304A (en) Image fusion method and device
JP2009010616A (en) Imaging device and image output control method
KR101038815B1 (en) Image capture system capable of fast auto focus
US8427555B2 (en) Imaging apparatus for displaying an area wider than a recording area
JP2007096588A (en) Imaging device and method for displaying image
EP1829361A1 (en) Method for extracting of multiple sub-windows of a scanning area by means of a digital video camera
US7515191B2 (en) Digital camera and solid-state image pickup unit
US11095824B2 (en) Imaging apparatus, and control method and control program therefor
CN107147848B (en) Automatic focusing method and real-time video acquisition system adopting same
KR20110090098A (en) Apparatus for processing digital image and thereof method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant