CN115379121A - Method for optimizing image preview results of fundus camera and related product - Google Patents

Method for optimizing image preview results of fundus camera and related product Download PDF

Info

Publication number
CN115379121A
CN115379121A CN202211269310.4A CN202211269310A CN115379121A CN 115379121 A CN115379121 A CN 115379121A CN 202211269310 A CN202211269310 A CN 202211269310A CN 115379121 A CN115379121 A CN 115379121A
Authority
CN
China
Prior art keywords
position information
parameter
image
eye image
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211269310.4A
Other languages
Chinese (zh)
Other versions
CN115379121B (en
Inventor
陈荡荡
和超
张大磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Airdoc Technology Co Ltd
Original Assignee
Beijing Airdoc Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Airdoc Technology Co Ltd filed Critical Beijing Airdoc Technology Co Ltd
Priority to CN202211269310.4A priority Critical patent/CN115379121B/en
Publication of CN115379121A publication Critical patent/CN115379121A/en
Application granted granted Critical
Publication of CN115379121B publication Critical patent/CN115379121B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14Arrangements specially adapted for eye photography
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The application discloses a method and related product for optimizing an image preview result of a fundus camera. The method comprises the following steps: acquiring first position information of the pupil center of the eye in the first eye image and second position information in the second eye image; calculating a first interception parameter and a second interception parameter related to optimization according to the first position information and the second position information; and respectively intercepting the first eye image and the second eye image by using the first interception parameter and the second interception parameter so as to correspondingly obtain a first intercepted image taking the first position information as the center and a second intercepted image taking the second position information as the center, so as to optimize an image preview result of the fundus camera. By means of the scheme, the image preview result of the eye image shot by the double cameras which are not precisely adjusted can be optimized, and the eye pupils are located in the middlest of the image after the working distance is aligned.

Description

Method for optimizing image preview results of fundus camera and related product
Technical Field
The present application relates generally to the field of automated fundus camera technology. More particularly, the present application relates to a method, apparatus, and computer-readable storage medium for optimizing an image preview result of a fundus camera.
Background
Automatic fundus cameras typically locate the pupils of the eye through a binocular camera and align a main barrel to the Working Distance ("WD") of the fundus camera to photograph the eye at the Working Distance of the fundus camera. In general, in order to facilitate the recognition and understanding of the working distance alignment process by the operator, precise position adjustment is performed when the binocular fundus camera is installed so that the pupil is positioned at the center of the screen when the working distances are aligned. For this reason, it is also described explicitly in the teaching materials related to the operation of the fundus camera that the upper and lower pupils of the binocular camera are aligned and spliced to form a complete "pupil", and the working distance is aligned to achieve an ideal state.
However, performing precise position adjustment for the installation of the binocular fundus camera not only increases the complexity of the apparatus structure, but also brings a great deal of work for production line production. At present, in order to simplify the production flow and reduce the complexity of installation and debugging of the auxiliary camera, a position adjusting bracket of a binocular camera is not designed for some fundus cameras. Therefore, the pupils are not necessarily at the middle of the image after the working distances are aligned, and the positions of the pupils after the working distances are aligned are different due to errors in binocular camera assembly of each device. In these cases, when the operator views the preview image of the sub-camera, the operator cannot intuitively know the working distance of the fundus camera, which affects the understanding of the operator on the working state of the fundus camera.
Disclosure of Invention
To at least partially solve the technical problems mentioned in the background, the present application provides a scheme for optimizing an image preview result of a fundus camera. By means of the scheme, the image preview result of the eye images shot by the double cameras which are not precisely adjusted can be optimized, and the eye pupils are located in the middlest of the images when the working distances are aligned. To this end, the present application provides solutions in a number of aspects as follows.
In a first aspect, the present application provides a method for optimizing an image preview result of a fundus camera, the fundus camera including at least two sub-cameras, and the two sub-cameras respectively taking a first eye image and a second eye image, wherein the method includes: acquiring first position information of the pupil center of the eye part in the first eye image and second position information of the pupil center of the eye part in the second eye image based on calibration parameters of the working distance of a fundus camera; calculating a first interception parameter and a second interception parameter related to optimization according to the first position information and the second position information; and respectively intercepting the first eye image and the second eye image by using the first interception parameter and the second interception parameter so as to correspondingly obtain a first intercepted image taking the first position information as the center and a second intercepted image taking the second position information as the center, so as to optimize an image preview result of the fundus camera.
In one embodiment, wherein the first truncation parameter includes first truncation positioning information and first truncation size information, the second truncation parameter includes second truncation positioning information and second truncation size information, and calculating the first truncation parameter and the second truncation parameter associated with the optimization based on the first location information and the second location information includes: determining a width parameter and a height parameter of the intercepting frame according to the first position information and the second position information; and calculating the first truncated positioning information and the first truncated size information and the second truncated positioning information and the second truncated size information based on the first position information, the second position information, the width parameter, and the height parameter.
In another embodiment, wherein determining the width parameter and the height parameter of the intercept box based on the first location information and the second location information comprises: respectively calculating the minimum distance from the pupil center to the vertical boundary of the first eye image and the minimum distance from the pupil center to the horizontal boundary of the first eye image according to the first position information; respectively calculating the minimum distance from the pupil center to the vertical boundary of the second eye image and the minimum distance from the pupil center to the horizontal boundary of the second eye image according to the second position information; and determining a width parameter and a height parameter of the intercepting frame based on the minimum distance from the pupil center to the vertical boundary of each eye image and the minimum distance from the horizontal boundary of each eye image.
In yet another embodiment, wherein determining the width parameter and the height parameter of the intercept box based on the minimum distance of the pupil center from the vertical boundary and the minimum distance from the horizontal boundary of each eye image comprises: determining the minimum value between the minimum distance from the pupil center to the vertical boundary of the first eye image and the minimum distance from the vertical boundary of the second eye image as the width parameter of the intercepting frame; and determining the minimum value between the minimum distance from the pupil center to the horizontal boundary of the first eye image and the minimum distance from the horizontal boundary of the second eye image as the height parameter of the intercepting frame.
In yet another embodiment, wherein calculating the first truncated positioning information and the first truncated size information and the second truncated positioning information and the second truncated size information based on the first position information, the second position information, the width parameter, and the height parameter comprises: calculating the first and second truncated positioning information based on the first position information, the second position information, the width parameter, and the height parameter, respectively; and calculating the first and second clipping size information based on the width parameter and the height parameter, respectively.
In yet another embodiment, wherein calculating the first truncated positioning information and the second truncated positioning information based on the first position information, the second position information, the width parameter, and the height parameter, respectively, comprises: and moving the first position information and the second position information along the width and the longitudinal direction respectively by corresponding width parameters and corresponding height parameters to obtain the first interception positioning information and the second interception positioning information.
In yet another embodiment, wherein calculating the first and second truncation size information based on the width parameter and the height parameter, respectively, comprises: multiplying the width parameter and the height parameter by a preset multiple respectively to calculate the first cut-off size information and the second cut-off size information.
In yet another embodiment, the method further comprises: cropping the first and second cropped images into upper and lower half regions, respectively, in a horizontal direction in response to the fundus camera performing working distance alignment; and splicing the upper half area of the first intercepted image and the lower half area of the second intercepted image, so that a complete pupil is spliced after the working distances in the fundus camera are aligned, and an optimized image preview result is displayed.
In a second aspect, the present application provides an apparatus for optimizing an image preview result of a fundus camera, comprising: a processor; and a memory storing program instructions for optimizing an image preview result of a fundus camera, which when executed by the processor, cause the apparatus to implement embodiments of the foregoing first aspect.
In a third aspect, the present application provides a computer readable storage medium having stored thereon computer readable instructions for optimizing image preview results for a fundus camera, which when executed by one or more processors implement embodiments of the foregoing first aspect.
According to the scheme of the application, corresponding first and second interception parameters are calculated according to first and second position information of a pupil center calibrated according to the working distance in the eye images shot by the two cameras respectively, and then the eye images shot by the two cameras respectively are intercepted through the first and second interception parameters so as to correspondingly obtain a first intercepted image taking the first position information as the center and a second intercepted image taking the second position information as the center. This can make the eye ground camera finish the working distance when aliging, the pupil centre of eye is in the most middle position of each eye image for operating personnel can learn the state of eye ground camera working distance directly perceivedly when watching the image of two cameras, thereby understands the state of eye ground camera working distance. Further, the embodiment of the application can splice the upper half area of the first intercepted image and the lower half area of the second intercepted image when the fundus camera performs working distance alignment, so that a complete pupil is spliced after the working distances in the fundus camera are aligned, and a standard for judging whether the working distances are aligned is provided for an operator. Based on this, through the scheme that utilizes this application, "readable" and "intelligibility" that can improve two camera preview images have reduced operating personnel's the use degree of difficulty.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present application will become readily apparent from the following detailed description, which proceeds with reference to the accompanying drawings. Several embodiments of the present application are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar or corresponding parts and in which:
fig. 1 is an exemplary schematic diagram showing a configuration of a binocular fundus camera portion;
fig. 2 is an exemplary flowchart diagram illustrating a method for optimizing an image preview result of a fundus camera according to an embodiment of the present application;
FIG. 3 is an exemplary diagram illustrating obtaining a truncated image according to an embodiment of the application;
FIG. 4 is an exemplary diagram illustrating a stitching effect presentation according to an embodiment of the present application;
FIG. 5 is an exemplary result diagram illustrating image preview result optimization according to an embodiment of the present application;
fig. 6 is an exemplary flowchart block diagram showing an entirety for optimizing an image preview result of a fundus camera according to an embodiment of the present application; and
fig. 7 is a block diagram showing an exemplary configuration of an apparatus for optimizing an image preview result of a fundus camera according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It should be understood that the embodiments described herein are only some of the embodiments provided for the convenience of clear understanding of the aspects and compliance with legal requirements of the application, and not all embodiments of the application may be implemented. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed in the present specification without any inventive step, shall fall within the scope of protection of the present application.
Fig. 1 is an exemplary schematic diagram showing a configuration of a binocular fundus camera portion. As shown in fig. 1, the binocular fundus camera may include at least two sub-cameras 101. In addition, the binocular fundus camera may further include a main camera 102. Wherein the two aforementioned sub-cameras 101 may be disposed on both sides of the main camera 102 for obtaining a first eye image (e.g., shown in the left drawing of fig. 5 a) and a second eye image (e.g., shown in the right drawing of fig. 5 a) with respect to the eyes from different angle shots, respectively. The aforementioned main camera 102 is used to capture a fundus image. In an application scenario, the pupil may be located according to the positions of the eye pupils in the first eye image and the second eye image. Based on the positioning information, the fundus camera may move the main camera 102 to a working distance of the fundus camera, and then take a fundus image at the working distance.
As can be seen from the description of the background art, for an eye image obtained without fine position adjustment, the pupil is not necessarily at the center of the image even after the working distances are aligned. Further, due to the fact that errors exist in assembly of the binocular cameras of each device, the positions of pupils are different after the working distances are aligned. This makes it impossible for the operator to intuitively know the state of the working distance of the fundus camera when viewing the preview image of the sub-camera.
Based on the above, the scheme for optimizing the image preview result of the auxiliary camera in the fundus camera is provided, so that the pupil is at the middle after the optimized preview image is aligned with the working distance, and the readability and the understandability of the dual-camera preview image are improved.
Fig. 2 is an exemplary flow diagram illustrating a method 200 for optimizing image preview results for a fundus camera according to an embodiment of the present application. As shown in fig. 2, at step 202, first position information of a pupil center of an eye portion in a first eye image and second position information in a second eye image are acquired based on a calibration parameter for a working distance of a fundus camera. In one implementation scenario, the aforementioned first eye image and second eye image may be respectively captured from different angles by two sub-cameras (such as shown in fig. 1 described above) in the fundus camera. In one embodiment, the first position information and the second position information may be obtained by calibrating the position of the working distance at the time of shipment based on a binocular fundus camera for performing visual navigation. After the position calibration, the positions of the centers of the pupils in the respective image coordinate systems of the two sub-cameras (including the first sub-camera and the second sub-camera) of the binocular system after the working distances are aligned are recorded, for example, as (x 1, y1, x2, y 2). Wherein, (x 1, y 1) is first position information of the pupil center on a first eye image shot by a first sub-camera when the pupil is located at the working distance; and (x 2, y 2) is second position information of the pupil center on a second eye image shot by the second secondary camera when the pupil is located at the working distance.
Based on the obtained first location information and the second location, at step 204, a first truncation parameter and a second truncation parameter associated with the optimization are calculated based on the first location information and the second location information. In one embodiment, the first truncation parameter includes first truncation positioning information and first truncation size information, and the second truncation parameter includes second truncation positioning information and second truncation size information. It is to be understood that the foregoing truncation parameter includes information of the truncation box. Specifically, the foregoing cut positioning information and cut size information respectively refer to the position (including the abscissa and the ordinate) and the size (including the width and the height) of the cut frame. Particularly, the capturing of the positioning information refers to the position of a corner point at the upper left corner of the capturing frame.
As an example, assume that the first truncation parameter is denoted as Rect1 and the second truncation parameter Rect2, respectively, where Rect1= (c) ((m))
Figure 561675DEST_PATH_IMAGE001
), Rect2=(
Figure 795210DEST_PATH_IMAGE002
) Then (1)
Figure 94605DEST_PATH_IMAGE003
) And (a)
Figure 806209DEST_PATH_IMAGE004
) Respectively represent the first interception positioning information and the second interception positioning information, namely the positions of the angular points of the upper left corners of the two interception frames.
Figure 779719DEST_PATH_IMAGE005
And
Figure 121838DEST_PATH_IMAGE006
Figure 970846DEST_PATH_IMAGE007
and
Figure 548458DEST_PATH_IMAGE008
respectively representing the first clipping size information and the second clipping size information, i.e. the width and height of the two clipping boxes.
In one embodiment, the width parameter and the height parameter of the capture frame may be first determined according to the first position information and the second position information, and then the first capture positioning information and the first capture size information and the second capture positioning information and the second capture size information may be calculated based on the first position information, the second position information, the width parameter, and the height parameter. More specifically, when the width parameter and the height parameter of the capture frame are captured, the minimum distance from the pupil center to the vertical boundary of the first eye image and the minimum distance from the horizontal boundary of the first eye image may be calculated according to the first position information, the minimum distance from the pupil center to the vertical boundary of the second eye image and the minimum distance from the horizontal boundary of the second eye image may be calculated according to the second position information, and the width parameter and the height parameter of the capture frame may be determined based on the minimum distance from the pupil center to the vertical boundary of each eye image and the minimum distance from the horizontal boundary of each eye image.
Further, a minimum value between a minimum distance of a pupil center from a vertical boundary of the first eye image and a minimum distance from a vertical boundary of the second eye image is determined as a width parameter of the intercept frame and a minimum value between a minimum distance of a pupil center from a horizontal boundary of the first eye image and a minimum distance from a horizontal boundary of the second eye image is determined as a height parameter of the intercept frame. That is to say, in the embodiment of the present application, the minimum distance between the center of the pupil and the left and right boundaries of the two eye images and the minimum distance between the center of the pupil and the upper and lower boundaries of the two eye images are calculated, and then the minimum value of the minimum distances between the center of the pupil and the left and right boundaries of the two eye images and the minimum value of the minimum distances between the center of the pupil and the horizontal boundaries of the two eye images are determined as the width parameter and the height parameter of the capture frame, respectively.
For example, in an exemplary scenario, assume that the first position information is (x 1, y 1) and the width and height of the first eye image are respectively recorded as (x 1, y 1)
Figure 799310DEST_PATH_IMAGE009
And
Figure 312331DEST_PATH_IMAGE010
. In this scenario, first, the minimum distance from the pupil center to the vertical (left and right) boundary of the first eye image and the minimum distance from the horizontal (upper and lower) boundary of the first eye image may be calculated, respectively. For example, assume that the minimum distance of the pupil center from the vertical (left and right) boundary of the first eye image is recorded as
Figure 648635DEST_PATH_IMAGE011
Assuming that the minimum distance from the pupil center to the horizontal (upper and lower) boundary of the first eye image is recorded as
Figure 780670DEST_PATH_IMAGE012
Then, then
Figure 886029DEST_PATH_IMAGE013
Figure 569951DEST_PATH_IMAGE014
In another exemplary scenario, assuming the second position information is (x 2, y 2), the width and height of the second eye image are (x 2, y 2) respectively
Figure 455868DEST_PATH_IMAGE015
And
Figure 578545DEST_PATH_IMAGE016
. In this scenario, the minimum distance from the pupil center to the vertical (left and right) boundary of the second eye image and the minimum distance from the horizontal (upper and lower) boundary of the second eye image may be calculated, respectively, first. For example, assume that the minimum distance of the pupil center from the vertical (left and right) boundary of the second eye image is recorded as
Figure 476093DEST_PATH_IMAGE017
Assuming that the minimum distance from the pupil center to the horizontal (upper and lower) boundary of the second eye image is recorded as
Figure 970398DEST_PATH_IMAGE018
Then, then
Figure 281293DEST_PATH_IMAGE019
Figure 879765DEST_PATH_IMAGE020
From the foregoing, two eye images of the distance between the centers of the pupils are obtainedMinimum distance of left and right boundaries of image
Figure 694137DEST_PATH_IMAGE021
Figure 110075DEST_PATH_IMAGE022
And a minimum distance from upper and lower boundaries of the two eye images
Figure 642687DEST_PATH_IMAGE023
Figure 779271DEST_PATH_IMAGE024
Then, the minimum value of the minimum distances from the left and right boundaries of the two eye images and the minimum distance from the horizontal boundary of the two eye images may be determined as the width parameter and the height parameter of the capture frame, respectively. For example, assume that the width parameter of the intercept box is noted
Figure 792357DEST_PATH_IMAGE025
The height parameter of the capture frame is recorded as
Figure 316879DEST_PATH_IMAGE026
Then, then
Figure 8892DEST_PATH_IMAGE027
Figure 604958DEST_PATH_IMAGE028
Based on the width parameter and the height parameter of the capturing frame obtained above, the first capturing positioning information and the first capturing size information, and the second capturing positioning information and the second capturing size information can be calculated by combining the first position information and the second position information. In one embodiment, the first and second truncated location information may be calculated based on the first and second location information, the width parameter, and the height parameter, respectively, and the first and second truncated size information may be calculated based on the width parameter and the height parameter, respectively.
Specifically, the first position information and the second position information are respectively moved by the corresponding width parameter and the corresponding height parameter in the transverse direction and the longitudinal direction to obtain first cut-off positioning information and second cut-off positioning information. Further, the width parameter and the height parameter are multiplied by a preset multiple, respectively, to calculate first cut-out size information and second cut-out size information.
The first position information and the second position information are respectively recorded as (x 1, y 1) and (x 2, y 2), and the width parameter and the height parameter of the intercepting frame are respectively recorded as
Figure 393923DEST_PATH_IMAGE029
And
Figure 761450DEST_PATH_IMAGE030
for example. Suppose the first truncated positioning information is noted as (
Figure 268655DEST_PATH_IMAGE031
) The second intercepted positioning information is marked as (
Figure 386521DEST_PATH_IMAGE032
) Then, then
Figure 29992DEST_PATH_IMAGE033
And
Figure 834000DEST_PATH_IMAGE034
can be respectively
Figure 562922DEST_PATH_IMAGE035
Figure 235212DEST_PATH_IMAGE036
Figure 998768DEST_PATH_IMAGE037
And
Figure 708098DEST_PATH_IMAGE038
can be respectively
Figure 268524DEST_PATH_IMAGE039
Figure 416608DEST_PATH_IMAGE040
. That is, (x 1, y 1) and (x 2, y 2) are moved in the lateral and longitudinal directions, respectively
Figure 972355DEST_PATH_IMAGE041
And
Figure 914903DEST_PATH_IMAGE042
obtaining the positioning information of the two capturing frames, namely the positions of the corner points of the upper left corners of the two capturing frames, which respectively correspond to
Figure 680734DEST_PATH_IMAGE043
And
Figure 632509DEST_PATH_IMAGE044
. Further, it is assumed that the first truncation size information and the second truncation size information are respectively noted as
Figure 42762DEST_PATH_IMAGE045
And
Figure 998954DEST_PATH_IMAGE046
Figure 189764DEST_PATH_IMAGE047
and
Figure 617334DEST_PATH_IMAGE048
then, then
Figure 209990DEST_PATH_IMAGE049
And
Figure 822237DEST_PATH_IMAGE050
may all be
Figure 234763DEST_PATH_IMAGE051
Figure 466025DEST_PATH_IMAGE052
And
Figure 913186DEST_PATH_IMAGE053
may all be
Figure 447067DEST_PATH_IMAGE054
. That is, the size (including width and height) of the capture box is
Figure 815732DEST_PATH_IMAGE055
Figure 647421DEST_PATH_IMAGE054
And determining a first interception parameter and a second interception parameter according to the obtained first interception positioning information, second interception positioning information, first interception size information and second interception size information. As previously mentioned, it is assumed that the first truncation parameter is denoted as Rect1 and the second truncation parameter Rect2, respectively, where Rect1= (c) ((m))
Figure 11407DEST_PATH_IMAGE001
), Rect2=(
Figure 903139DEST_PATH_IMAGE002
) Then Rect1= (g) ((g))
Figure 227941DEST_PATH_IMAGE056
Figure 863322DEST_PATH_IMAGE057
), Rect2=(
Figure 596661DEST_PATH_IMAGE058
)。
After the first and second clipping parameters are obtained, in step 206, the first and second eye images are respectively clipped by using the first and second clipping parameters, so as to correspondingly obtain a first clipped image centered on the first position information and a second clipped image centered on the second position information, thereby optimizing the image preview result of the fundus camera. In one implementation scene, the first eye image and the second eye image are respectively intercepted through the intercepting frames determined by the first intercepting parameter and the second intercepting parameter, so that pupils are positioned in the middlest after the intercepted images are aligned at the working distance, and the image preview results of two auxiliary cameras in the fundus camera are optimized. The foregoing optimization process will be described in detail later in conjunction with fig. 3.
As can be seen from the above description, in the embodiment of the present application, the corresponding clipping parameters are respectively calculated according to the position information of the pupil center in each eye image when the working distance is calibrated, that is, the position and the size of the corner point at the upper left corner of the corresponding clipping frame are calculated. Then, the corresponding interception parameters are used for respectively intercepting each eye image so as to obtain an intercepted image taking the pupil center as the center. Therefore, when the working distances of the fundus cameras are aligned, pupils in eye images shot by the two auxiliary cameras are located in the middlest positions, so that an operator can intuitively know the working distance state of the fundus cameras when watching the preview images of the two cameras, and the working distance state of the fundus cameras can be understood.
In one embodiment, the embodiment of the present application further provides that when the fundus camera performs working distance alignment, the first captured image and the second captured image are respectively cut into an upper half area and a lower half area along the horizontal direction, and the upper half area of the first captured image and the lower half area of the second captured image are spliced, so that a complete pupil is spliced after the working distances in the fundus camera are aligned, and an optimized image preview result is displayed.
It can be understood that, the fundus camera adopting the binocular camera for working distance positioning can splice the upper part and the lower part of the two images shot by the two cameras into a complete pupil as a mark for finishing alignment of the working distance of the fundus camera, and an operator can judge whether the working distance is aligned according to whether the effect is achieved. Therefore, the pupil formed by splicing the upper part and the lower part of the two images improves the readability and the understandability of the preview images of the two cameras, so that the operating personnel can conveniently understand the alignment state of the working distances of the fundus cameras, and the using difficulty of the operating personnel is reduced.
Fig. 3 is an exemplary diagram illustrating obtaining a truncated image according to an embodiment of the present application. As shown in fig. 3, it is assumed that (a) in fig. 3 illustrates a first eye image 301 taken by a first sub-camera when the fundus camera is at a working distance, and (b) in fig. 3 illustrates a second eye image 302 taken by a second sub-camera when the fundus camera is at a working distance. In the image preview, the eye pupil 303 is shifted to the left in the first eye image 301, and the eye pupil 303 is shifted to the right in the second eye image 302. In an implementation scenario, the image preview result can be optimized by using the embodiment of the application, so that when the working distances of the fundus cameras are aligned, the pupil is located in the middle of each image.
Specifically, first position information (x 1, y 1) of the pupil center of the eye pupil 303 on the first eye image 301 and second position information (x 2, y 2) on the second eye image 302, respectively, may be acquired first from calibration parameters at the time of shipment of the fundus camera. Then, a first truncation parameter Rect1 =: (respectively) is determined from each position information
Figure 659294DEST_PATH_IMAGE001
) And a second truncation parameter Rect2= (g =: (b))
Figure 471393DEST_PATH_IMAGE002
). As mentioned above, (A) a
Figure 644885DEST_PATH_IMAGE003
) And (a)
Figure 983463DEST_PATH_IMAGE004
) The first truncated positioning information and the second truncated positioning information are respectively shown, that is, the positions of the corner points (for example, shown by solid dots in the figure) at the upper left corners of the two truncated boxes (for example, shown by dashed boxes in the figure).
Figure 216998DEST_PATH_IMAGE045
And
Figure 516392DEST_PATH_IMAGE059
Figure 572204DEST_PATH_IMAGE060
and
Figure 702971DEST_PATH_IMAGE048
respectively representing the first clipping size information and the second clipping size information, i.e. the width and height of the two clipping boxes.
In an implementation scenario, the width parameter and the height parameter of the capture frame may be determined according to the first position information and the second position information, and then the first capture positioning information and the first capture size information, and the second capture positioning information and the second capture size information may be calculated based on the first position information, the second position information, the width parameter, and the height parameter. Wherein the width parameter and the height parameter of the capture frame are determined by the minimum value of the minimum distances from the pupil center to the left and right boundaries of the two eye images and the minimum distance from the horizontal boundary of the two eye images.
For example, assume that the width and height of the first-eye image 301 are respectively noted as
Figure 310670DEST_PATH_IMAGE009
And
Figure 894098DEST_PATH_IMAGE010
the width and height of the second eye image 302 are
Figure 737289DEST_PATH_IMAGE061
And
Figure 988142DEST_PATH_IMAGE062
then the minimum distance of the pupil center from the vertical (left and right) boundary of the first eye image 301
Figure 501163DEST_PATH_IMAGE063
Minimum distance of pupil center from horizontal (upper and lower) boundary of first eye image 301
Figure 571887DEST_PATH_IMAGE064
. Further, the minimum distance of the pupil center from the vertical (left and right) boundary of the second eye image 302
Figure 468036DEST_PATH_IMAGE019
Minimum distance of pupil center from horizontal (upper and lower) boundary of second eye image 302
Figure 511079DEST_PATH_IMAGE065
. Under the scene, the width parameter of the interception frame can be obtained
Figure 257318DEST_PATH_IMAGE066
Height parameter of intercepting frame
Figure 408814DEST_PATH_IMAGE067
Then, first truncated location information (b) can be calculated according to the first position information, the second position information, the width parameter and the height parameter
Figure 203594DEST_PATH_IMAGE003
) And first cut size information
Figure 429039DEST_PATH_IMAGE068
Figure 159229DEST_PATH_IMAGE059
And second intercepting the positioning information (
Figure 204545DEST_PATH_IMAGE004
) And second truncated size information
Figure 803017DEST_PATH_IMAGE069
Figure 476444DEST_PATH_IMAGE048
. As an example, the aforementioned first truncated positioning information (f)
Figure 502169DEST_PATH_IMAGE003
) Is composed of
Figure 34781DEST_PATH_IMAGE070
Second intercepting the positioning information (
Figure 810845DEST_PATH_IMAGE004
) Is composed of
Figure 745303DEST_PATH_IMAGE071
. Widths corresponding to the first cut size information and the second cut size information
Figure 941929DEST_PATH_IMAGE068
And
Figure 961838DEST_PATH_IMAGE069
may all be
Figure 292325DEST_PATH_IMAGE072
The height corresponding to the first cut size information and the second cut size information
Figure 81289DEST_PATH_IMAGE059
And
Figure 448817DEST_PATH_IMAGE048
may all be
Figure 956021DEST_PATH_IMAGE073
. The first eye image 301 and the second eye image 302 can be respectively clipped by using the aforementioned clipping parameters to correspondingly obtain clipped images, such as the first clipped image and the second clipped image shown in the dashed boxes in (a) diagram and (b) diagram in fig. 3.
In one embodiment, the first and second truncated images may be cropped in the horizontal direction into upper and lower half regions, respectively, and the upper half region of the first truncated image and the lower half region of the second truncated image are spliced, so that a complete pupil can be spliced, for example, as shown in fig. 4.
Fig. 4 is an exemplary schematic diagram illustrating a stitching effect presentation according to an embodiment of the present application. As an upper half area 401 of the first cutout image in fig. 4, and a lower half area 402 of the second cutout image in fig. 4. After the fundus camera completes the working distance alignment, the upper half 401 of the first truncated image and the lower half 402 of the second truncated image may stitch out a complete pupil.
Fig. 5 is an exemplary result diagram illustrating image preview result optimization according to an embodiment of the present application. As shown in the left side of the drawing (a) of fig. 5, a first eye image captured by the first sub-camera is shown, and as shown in the right side of the drawing (a) of fig. 5, a second eye image captured by the second sub-camera is shown. After the optimization of the embodiment of the application, the first and second cut-out images can be obtained, and the upper half area of the first cut-out image shown in the left diagram of the (b) diagram of fig. 5 and the lower half area of the second cut-out image shown in the right diagram of the (b) diagram of fig. 5 are obtained by cutting the first and second cut-out images in the horizontal direction. As can be seen from the figure, the pupil in the intercepted image is at the most middle position. Fig. 5 (c) further illustrates a stitching effect diagram of the upper half region of the first cutout image and the lower half region of the second cutout image. As can be seen from the figure, after the fundus camera finishes the alignment of the working distance, the upper half area of the first intercepted image and the lower half area of the second intercepted image can splice a complete pupil.
Fig. 6 is an exemplary flowchart block diagram showing an entirety for optimizing an image preview result of a fundus camera according to an embodiment of the present application. As shown in fig. 6, at step 602, calibration parameters at the time of shipment of the fundus camera, for example, (x 1, y1, x2, y 2) are acquired. Next, at step 604, the position information of the pupil center in each eye image, including the first position information (x 1, y 1) and the second position information (x 2, y 2), may be obtained through the aforementioned calibration parameters.
After obtaining the aforementioned position information, atAt step 606, a first truncation parameter, rect1= (may be calculated based on each location information
Figure 575353DEST_PATH_IMAGE001
) And a second truncation parameter Rect2= (g =: (b))
Figure 422086DEST_PATH_IMAGE002
). Specifically, the corresponding positioning information in the first interception parameter and the second interception parameter is determined according to each position information: (
Figure 288411DEST_PATH_IMAGE003
) And (a)
Figure 79649DEST_PATH_IMAGE004
) And determining corresponding size information in the first and second clipping parameters
Figure 689622DEST_PATH_IMAGE068
And
Figure 125283DEST_PATH_IMAGE074
Figure 162509DEST_PATH_IMAGE069
and
Figure 702030DEST_PATH_IMAGE048
. In one embodiment, the width parameter and the height parameter of the capture frame may be determined according to the first position information and the second position information, and then the first capture positioning information and the first capture size information and the second capture positioning information and the second capture size information may be calculated based on the first position information, the second position information, the width parameter, and the height parameter.
In one exemplary scenario, assume that the width parameter of the intercept box is
Figure 850114DEST_PATH_IMAGE075
The height parameter of the intercepting frame is
Figure 671440DEST_PATH_IMAGE076
Then Rect1=
Figure 941884DEST_PATH_IMAGE078
,Rect2=(
Figure 645398DEST_PATH_IMAGE058
). Further, the foregoing Rect1 and Rect2 can be respectively expressed as the following equations:
Figure 534857DEST_PATH_IMAGE079
Figure 86055DEST_PATH_IMAGE080
wherein in the above formula (1) and formula (2)
Figure 668346DEST_PATH_IMAGE009
And
Figure 859156DEST_PATH_IMAGE081
Figure 880201DEST_PATH_IMAGE082
and
Figure 50020DEST_PATH_IMAGE083
respectively representing the width and height of the first eye image and the width and height of the second eye image.
Next, at step 608, the first eye image and the second eye image are respectively clipped by using the above clipping parameters to obtain a first clipped image and a second clipped image, so as to optimize the image preview result. For more details of the optimization, reference may be made to the descriptions in fig. 2 to fig. 3, which are not repeated herein. Further, at step 610, the first captured image and the second captured image are respectively cut into an upper half area and a lower half area along the horizontal direction, and then at step 612, the upper half area of the first captured image and the lower half area of the second captured image are spliced to obtain a splicing effect map. Based on the obtained stitching effect map, at step 614, the stitching effect map may be displayed so that the operator can intuitively know the state of the fundus camera working distance.
Fig. 7 is a block diagram illustrating an exemplary configuration of an apparatus 700 for optimizing an image preview result of a fundus camera according to an embodiment of the present application. It will be appreciated that the device implementing aspects of the subject application may be a single device (e.g., a computing device) or a multifunction device including various peripheral devices.
As shown in fig. 7, the apparatus of the present application may include a central processing unit or central processing unit ("CPU") 711, which may be a general purpose CPU, a special purpose CPU, or other execution unit that processes and programs to run. Further, device 700 can also include a mass memory 712 and read only memory ("ROM") 713, wherein mass memory 712 can be configured to store various types of data, including various types of data related to the first and second eye images, algorithm data, intermediate results, and various programs needed to operate device 700. ROM 713 may be configured to store power-on self-test for the device 700, initialization of functional blocks in the system, drivers for basic input/output of the system, and data and instructions needed to boot the operating system.
Optionally, device 700 may also include other hardware platforms or components, such as the illustrated tensor processing unit ("TPU") 714, graphics processing unit ("GPU") 715, field programmable gate array ("FPGA") 716, and machine learning unit ("MLU") 717. It is understood that although various hardware platforms or components are shown in device 700, this is for illustrative purposes only and is not intended to be limiting, as appropriate hardware may be added or removed by those skilled in the art as may be required. For example, the device 700 may include only a CPU, an associated storage device, and an interface device to implement the method for optimizing the image preview result of the fundus camera of the present application.
In some embodiments, to facilitate the transfer and interaction of data with external networks, the device 700 of the present application further comprises a communication interface 718 such that it may be connected via the communication interface 718 to a local area network/wireless local area network ("LAN/WLAN") 705, which in turn may be connected via the LAN/WLAN to a local server 706 or to the Internet ("Internet") 707. Alternatively or additionally, the device 700 of the present application may also be directly connected to the internet or a cellular network based on wireless communication technology, such as based on 3 rd generation ("3G"), 4 th generation ("4G"), or 5 th generation ("5G") wireless communication technology, through the communication interface 718. In some application scenarios, the device 700 of the present application may also access the server 708 and database 709 of the external network as needed to obtain various known algorithms, data, and modules, and may remotely store various data, such as various types of data or instructions for rendering, for example, an eye image, a captured image, a stitched image, and the like.
The peripheral devices of the apparatus 700 may include a display device 702, an input device 703 and a data transmission interface 704. In one embodiment, the display device 702 may, for example, include one or more speakers and/or one or more visual displays configured for voice prompting and/or image video display of the image preview results of the optimized fundus camera of the present application. Input devices 703 may include other input buttons or controls, such as a keyboard, a mouse, a microphone, a gesture capture camera, etc., configured to receive input of audio data and/or user instructions. The data transfer interface 704 may include, for example, a serial interface, a parallel interface, or a universal serial bus interface ("USB"), a small computer system interface ("SCSI"), serial ATA, fireWire ("FireWire"), PCI Express, and high definition multimedia interface ("HDMI"), among others, configured for data transfer and interaction with other devices or systems. According to aspects of the present application, the data transfer interface 704 may receive eye images taken by two sub-cameras from a fundus camera and transmit eye images taken by two sub-cameras or various other types of data or results to the device 700.
The aforementioned CPU 711, mass storage 712, ROM 713, TPU 714, GPU 715, FPGA 716, MLU 717, and communication interface 718 of the device 700 of the present application may be interconnected via bus 719, and enable data interaction with peripheral devices via the bus. Through the bus 719, the cpu 711 may control other hardware components and their peripherals in the device 700, in one embodiment.
The apparatus for optimizing the image preview result of the fundus camera that can be used to carry out the present application is described above in connection with fig. 7. It is to be understood that the device structures or architectures herein are merely exemplary, and that the implementations and entities of the present application are not limited thereto but may be varied without departing from the spirit of the application.
From the above description in conjunction with the accompanying drawings, those skilled in the art will also appreciate that the embodiments of the present application can also be implemented by software programs. The present application thus also provides a computer program product. The computer program product may be used to implement the method for optimizing image preview results for a fundus camera as described herein in connection with fig. 1-6.
It should be noted that while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Rather, the steps depicted in the flowcharts may change order of execution. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
It should be understood that when the terms first, second, third, fourth, etc. are used in the claims of this application, in the description and in the drawings, they are used only to distinguish one object from another, and not to describe a particular order. The terms "comprises" and "comprising," when used in the specification and claims of this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only, and is not intended to be limiting of the application. As used in the specification and claims of this application, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the specification and claims of this application refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
Although the embodiments of the present application are described above, the descriptions are only examples for facilitating understanding of the present application and are not intended to limit the scope and application scenarios of the present application. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims.

Claims (10)

1. A method for optimizing an image preview result of a fundus camera, the fundus camera including at least two sub-cameras, and the two sub-cameras respectively capturing a first eye image and a second eye image, the method comprising:
acquiring first position information of the pupil center of the eye part in the first eye image and second position information of the pupil center of the eye part in the second eye image based on calibration parameters of the working distance of a fundus camera;
calculating a first interception parameter and a second interception parameter related to optimization according to the first position information and the second position information; and
and respectively intercepting the first eye image and the second eye image by using the first interception parameter and the second interception parameter so as to correspondingly obtain a first intercepted image taking the first position information as the center and a second intercepted image taking the second position information as the center, so as to optimize an image preview result of the fundus camera.
2. The method of claim 1, wherein the first truncation parameter includes first truncation positioning information and first truncation size information, wherein the second truncation parameter includes second truncation positioning information and second truncation size information, and wherein calculating the first and second truncation parameters associated with the optimization based on the first and second location information comprises:
determining a width parameter and a height parameter of an interception frame according to the first position information and the second position information; and
calculating the first truncated positioning information and the first truncated size information and the second truncated positioning information and the second truncated size information based on the first position information, the second position information, the width parameter, and the height parameter.
3. The method of claim 2, wherein determining a width parameter and a height parameter of an intercept box from the first location information and the second location information comprises:
respectively calculating the minimum distance from the pupil center to the vertical boundary of the first eye image and the minimum distance from the pupil center to the horizontal boundary of the first eye image according to the first position information;
respectively calculating the minimum distance from the pupil center to the vertical boundary of the second eye image and the minimum distance from the pupil center to the horizontal boundary of the second eye image according to the second position information; and
and determining the width parameter and the height parameter of the intercepting frame based on the minimum distance from the pupil center to the vertical boundary of each eye image and the minimum distance from the pupil center to the horizontal boundary of each eye image.
4. The method of claim 3, wherein determining the width parameter and the height parameter of the capture frame based on the minimum distance of the pupil center from the vertical boundary and the minimum distance from the horizontal boundary of each eye image comprises:
determining the minimum value between the minimum distance from the pupil center to the vertical boundary of the first eye image and the minimum distance from the vertical boundary of the second eye image as the width parameter of the intercepting frame; and
and determining the minimum value between the minimum distance from the pupil center to the horizontal boundary of the first eye image and the minimum distance from the horizontal boundary of the second eye image as the height parameter of the intercepting frame.
5. The method of claim 2, wherein calculating the first truncated position information and the first truncated size information and the second truncated position information and the second truncated size information based on the first position information, the second position information, the width parameter, and the height parameter comprises:
calculating the first truncated positioning information and the second truncated positioning information based on the first position information, the second position information, the width parameter and the height parameter, respectively; and
calculating the first and second truncation size information based on the width parameter and the height parameter, respectively.
6. The method of claim 5, wherein calculating the first truncated position information and the second truncated position information based on the first position information, the second position information, the width parameter, and the height parameter, respectively, comprises:
and moving the first position information and the second position information along the width and the longitudinal direction respectively by a corresponding width parameter and a corresponding height parameter to obtain the first interception positioning information and the second interception positioning information.
7. The method of claim 5, wherein calculating the first and second truncation size information based on the width parameter and the height parameter, respectively, comprises:
multiplying the width parameter and the height parameter by a preset multiple respectively to calculate the first cut-off size information and the second cut-off size information.
8. The method of claim 1, further comprising:
cropping the first and second cropped images into upper and lower half regions, respectively, in a horizontal direction in response to the fundus camera performing working distance alignment; and
and splicing the upper half area of the first intercepted image and the lower half area of the second intercepted image, so that a complete pupil is spliced after the working distances in the fundus camera are aligned, and an optimized image preview result is displayed.
9. An apparatus for optimizing an image preview result of a fundus camera, comprising:
a processor; and
a memory storing program instructions for optimizing an image preview result of a fundus camera, which when executed by the processor, cause the apparatus to implement the method of any one of claims 1-8.
10. A computer readable storage medium having stored thereon computer readable instructions for optimizing image preview results for a fundus camera, the computer readable instructions, when executed by one or more processors, implementing the method of any one of claims 1-8.
CN202211269310.4A 2022-10-17 2022-10-17 Method for optimizing image preview results of fundus camera and related product Active CN115379121B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211269310.4A CN115379121B (en) 2022-10-17 2022-10-17 Method for optimizing image preview results of fundus camera and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211269310.4A CN115379121B (en) 2022-10-17 2022-10-17 Method for optimizing image preview results of fundus camera and related product

Publications (2)

Publication Number Publication Date
CN115379121A true CN115379121A (en) 2022-11-22
CN115379121B CN115379121B (en) 2022-12-20

Family

ID=84074001

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211269310.4A Active CN115379121B (en) 2022-10-17 2022-10-17 Method for optimizing image preview results of fundus camera and related product

Country Status (1)

Country Link
CN (1) CN115379121B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103491857A (en) * 2011-04-27 2014-01-01 卡尔蔡司医疗技术股份公司 Systems and methods for improved ophthalmic imaging
US20150272432A1 (en) * 2014-03-31 2015-10-01 Nidek Co., Ltd. Ophthalmic photography device, ophthalmic photography method, and ophthalmic photography program
CN110604543A (en) * 2018-06-15 2019-12-24 株式会社拓普康 Ophthalmic device
CN111449620A (en) * 2020-04-30 2020-07-28 上海美沃精密仪器股份有限公司 Full-automatic fundus camera and automatic photographing method thereof
CN113572964A (en) * 2021-08-04 2021-10-29 上海传英信息技术有限公司 Image processing method, mobile terminal and storage medium
CN114972462A (en) * 2022-07-27 2022-08-30 北京鹰瞳科技发展股份有限公司 Method for optimizing working distance alignment effect of fundus camera and related product

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103491857A (en) * 2011-04-27 2014-01-01 卡尔蔡司医疗技术股份公司 Systems and methods for improved ophthalmic imaging
US20150272432A1 (en) * 2014-03-31 2015-10-01 Nidek Co., Ltd. Ophthalmic photography device, ophthalmic photography method, and ophthalmic photography program
CN110604543A (en) * 2018-06-15 2019-12-24 株式会社拓普康 Ophthalmic device
CN111449620A (en) * 2020-04-30 2020-07-28 上海美沃精密仪器股份有限公司 Full-automatic fundus camera and automatic photographing method thereof
CN113572964A (en) * 2021-08-04 2021-10-29 上海传英信息技术有限公司 Image processing method, mobile terminal and storage medium
CN114972462A (en) * 2022-07-27 2022-08-30 北京鹰瞳科技发展股份有限公司 Method for optimizing working distance alignment effect of fundus camera and related product

Also Published As

Publication number Publication date
CN115379121B (en) 2022-12-20

Similar Documents

Publication Publication Date Title
US10609282B2 (en) Wide-area image acquiring method and apparatus
US11164323B2 (en) Method for obtaining image tracking points and device and storage medium thereof
CN109104596B (en) Projection system and correction method of display image
US9875547B2 (en) Method and apparatus for adjusting stereoscopic image parallax
CN110809786B (en) Calibration device, calibration chart, chart pattern generation device, and calibration method
US8854359B2 (en) Image processing apparatus, image processing method, storage medium, and image processing system
JP4512584B2 (en) Panorama video providing method and apparatus with improved image matching speed and blending method
US10347048B2 (en) Controlling a display of a head-mounted display device
EP3067866A1 (en) Method and device for converting virtual view into stereoscopic view
US10467770B2 (en) Computer program for calibration of a head-mounted display device and head-mounted display device using the computer program for calibration of a head-mounted display device
US20150269760A1 (en) Display control method and system
US10643334B2 (en) Image presentation control methods and image presentation control apparatuses
EP2015248B1 (en) Method, program and apparatus for correcting a distortion of an image
WO2019052534A1 (en) Image stitching method and device, and storage medium
WO2020140758A1 (en) Image display method, image processing method, and related devices
TW201342306A (en) Image processing device, image processing method, program for image processing device, and image display device
US9959841B2 (en) Image presentation control methods and image presentation control apparatuses
US8965105B2 (en) Image processing device and method
US20230025058A1 (en) Image rectification method and device, and electronic system
KR20200093004A (en) Method and system for testing wearable devices
US20240051475A1 (en) Display adjustment method and apparatus
CN108282650B (en) Naked eye three-dimensional display method, device and system and storage medium
CN115379121B (en) Method for optimizing image preview results of fundus camera and related product
JP2012222664A (en) On-vehicle camera system
CN112172671B (en) Method and device for displaying rear view image of commercial vehicle, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant