CN117149032A - Visual field expansion method, electronic device, and storage medium - Google Patents

Visual field expansion method, electronic device, and storage medium

Info

Publication number
CN117149032A
Authority
CN
China
Prior art keywords
image
view
visual field
field
amplified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210563486.4A
Other languages
Chinese (zh)
Inventor
Liang Zhen (梁震)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN202210563486.4A priority Critical patent/CN117149032A/en
Priority to PCT/CN2023/083433 priority patent/WO2023226570A1/en
Publication of CN117149032A publication Critical patent/CN117149032A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00 Aspects of display data processing
    • G09G 2340/04 Changes in size, position or resolution of an image
    • G09G 2340/045 Zooming at least part of an image, i.e. enlarging it or shrinking it

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Multimedia (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the invention disclose a field-of-view expansion method, an electronic device, and a storage medium. The method includes: acquiring main field-of-view information of a target object, the main field-of-view information including at least one of a main field-of-view image and a main field-of-view direction, so that the field of view can be expanded according to either the main field-of-view image or the main field-of-view direction; acquiring an expanded field-of-view range to be reached and a surrounding environment image of the position of the target object; and finally obtaining and displaying an expanded field-of-view image with continuous content according to the main field-of-view information, the surrounding environment image, and the expanded field-of-view range, where the display scale of the main field-of-view image differs from that of the expanded field-of-view image. The invention effectively improves the viewing continuity of the expanded field of view: by detecting the position of the main field of view and expanding the field of view on the basis of the main field-of-view information and the surrounding environment image, a continuous expanded field-of-view image can be generated, so that the user can directly see more blind-zone content and the safety hazards caused by blind zones are reduced.

Description

Visual field expansion method, electronic device, and storage medium
Technical Field
The present invention relates to, but is not limited to, the field of display technologies, and in particular to a field-of-view expansion method, an electronic device, and a storage medium.
Background
In the safety and display fields, field-of-view expansion is often required. In traffic safety, for example, optical rearview mirrors, electronic rearview mirrors, and similar devices are placed at specific positions within the natural field of view so that a user can observe blind zones outside the natural field of view (such as the sides and the rear); the field of view is thereby expanded, and image information of those areas is provided to the user when the user deliberately turns to look at the device.
However, the extended field of view provided in the related art and the user's original field of view are two independent parts: the observed content transitions discontinuously between the original field of view and the auxiliary field of view. The user's observation therefore lacks continuity, the viewing experience is poor, and serious safety hazards are easily caused.
Disclosure of Invention
Embodiments of the present invention provide a field-of-view expansion method, an electronic device, and a storage medium, which can effectively improve the viewing continuity of the expanded field of view.
In a first aspect, an embodiment of the present invention provides a field-of-view expansion method, including: acquiring main field-of-view information of a target object, where the main field-of-view information includes at least one of a main field-of-view image or a main field-of-view direction; acquiring an expanded field-of-view range to be reached and a surrounding environment image of the position of the target object; and obtaining an expanded field-of-view image with continuous content according to the main field-of-view information, the surrounding environment image, and the expanded field-of-view range, where the expanded field-of-view image includes the main field-of-view image and an auxiliary field-of-view image, and the display scale of the main field-of-view image differs from the display scale of the expanded field-of-view image.
In a second aspect, an embodiment of the present invention provides an electronic device, including a memory and a processor, the memory storing a computer program, where the processor implements the field-of-view expansion method according to any one of the embodiments of the first aspect when executing the computer program.
In a third aspect, an embodiment of the present invention provides a computer-readable storage medium storing a program that, when executed by a processor, implements the field-of-view expansion method according to any one of the embodiments of the first aspect.
Embodiments of the invention include at least the following beneficial effects. In the field-of-view expansion method, main field-of-view information of a target object is acquired, the main field-of-view information including at least one of a main field-of-view image and a main field-of-view direction, so that the field of view can be expanded according to either one. The expanded field-of-view range to be reached and a surrounding environment image of the position of the target object are then acquired. Finally, an expanded field-of-view image with continuous content is obtained according to the main field-of-view information, the surrounding environment image, and the expanded field-of-view range, where the expanded field-of-view image includes the main field-of-view image and an auxiliary field-of-view image and the display scale of the main field-of-view image differs from that of the expanded field-of-view image. This effectively improves the viewing continuity of the expanded field of view: by detecting the position of the main field of view and expanding the field of view on the basis of the main field-of-view information and the surrounding environment image, a continuous expanded field-of-view image can be generated, so that the user can directly see more blind-zone content and the safety hazards caused by blind zones are reduced.
Drawings
FIG. 1 is a flow chart of a field-of-view expansion method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of field-of-view expansion according to an embodiment of the present invention;
FIG. 3 is a flow chart of a field-of-view expansion method according to another embodiment of the present invention;
FIG. 4 is a schematic diagram comparing viewing angles in field-of-view expansion according to an embodiment of the present invention;
FIG. 5 is a flow chart of a field-of-view expansion method according to another embodiment of the present invention;
FIG. 6 is a flow chart of a field-of-view expansion method according to another embodiment of the present invention;
FIG. 7 is a flow chart of a field-of-view expansion method according to another embodiment of the present invention;
FIG. 8 is a flow chart of a field-of-view expansion method according to another embodiment of the present invention;
FIG. 9 is a schematic diagram of field-of-view expansion before the adjusted compression according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of field-of-view expansion after the adjusted compression according to an embodiment of the present invention;
FIG. 11 is a flow chart of a field-of-view expansion method according to another embodiment of the present invention;
FIG. 12 is a schematic diagram of the function curve of an adjustment function according to an embodiment of the present invention;
FIG. 13 is a flow chart of a field-of-view expansion method according to another embodiment of the present invention;
FIG. 14 is a flow chart of a field-of-view expansion method according to another embodiment of the present invention;
FIG. 15 is a flow chart of a field-of-view expansion method according to another embodiment of the present invention;
FIG. 16 is a schematic diagram of a first-person viewing-angle mode according to an embodiment of the present invention;
FIG. 17 is a schematic diagram of a third-person viewing-angle mode according to an embodiment of the present invention;
FIG. 18 is a schematic diagram of a field-of-view expansion system according to an embodiment of the present invention;
FIG. 19 is a schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the description of the present invention, it should be understood that references to orientation descriptions such as upper, lower, front, rear, left, right, etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the embodiments of the present invention.
It should be understood that, in the description of the embodiments of the present invention, "several" means one or more, "plural" (or "multiple") means two or more, and terms such as "greater than", "less than", and "exceeding" are understood to exclude the stated number, while terms such as "above", "below", and "within" are understood to include the stated number. Where used, the terms "first", "second", etc. merely distinguish technical features and should not be construed as indicating or implying relative importance, the number of the indicated technical features, or the precedence of the indicated technical features.
In the description of the embodiments of the present invention, unless explicitly defined otherwise, terms such as arrangement, installation, connection, etc. should be construed broadly, and those skilled in the art may reasonably determine the specific meaning of the terms in the embodiments of the present invention in combination with the specific contents of the technical solutions.
In the embodiments of the present invention, the high bandwidth and low latency of 5G and next-generation communication technologies serve as the technical foundation, network technologies such as distributed peer-to-peer transmission serve as the network foundation, and near-eye display devices, earphones, sound-pickup devices, and the like serve as the device foundation. On this basis a new "auxiliary system" concept is proposed, which can expand the real-time field of view of specialized workers or ordinary users and improve safety and working effectiveness during driving and hazardous operations.
The physiological characteristics of human vision are as follows. The normal field of view is about 120 degrees horizontally and 60 degrees vertically, within which a resolution of about 0.0003 radian (1 arcminute) is available. According to research, the human eye contains roughly 5 million photoreceptive nerve cells, of which about 1 million cannot distinguish color, so the imaging capability of the human eye is roughly equivalent to 4 million pixels. A resolution of 1 arcminute (1.0 visual acuity) exists only near the fovea of the macula; outside the fovea, acuity drops to about 10 arcminutes (0.1 visual acuity). The field of view of a single eye is approximately 150 degrees horizontally, of which only the central 10 degrees or so reaches 1-arcminute resolution, while the remaining 140 degrees has only about 10-arcminute resolution. Therefore, within the physiological visual range of the human eye, compressing the angular size occupied by content outside the main field of view has a physiological basis in scenarios such as driving and industrial operations.
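For reference, the arcminute figure quoted above converts to radians as shown in the snippet below; this is a standard unit conversion added for clarity and is not part of the original text.

```python
import math

# 1 arcminute expressed in radians (supports the ~0.0003 rad figure above)
one_arcminute_rad = math.radians(1 / 60)   # approximately 2.9e-4 rad
```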
It will be appreciated that the range of human vision, perception, hearing, and the like is limited, and dangers caused by an insufficient perception range are frequently encountered. In the past this problem was either not solved at all, or was only partially mitigated by training perception and reaction speed or by relying on other tools and regulations. How to improve the user's observation capability, enlarge the field of view, and reduce safety hazards is therefore a key focus of research.
Owing to the high bandwidth and low latency of 5G and next-generation wireless communication technologies, together with the progress of current near-eye display technology, the present application can partially or completely shift the human visual perception of the outside world from its natural state to an artificially enhanced state, thereby obtaining stronger perception capability than in the natural state; this enhancement is then used to improve safety and efficiency.
On this basis, embodiments of the present application provide a field-of-view expansion method, an electronic device, and a storage medium, which can effectively improve the viewing continuity of the expanded field of view. By detecting the position of the main field of view and expanding the field of view on the basis of the main field-of-view information and the surrounding environment image, a continuous expanded field-of-view image can be generated, so that the user can directly see more blind-zone content and the safety hazards caused by blind zones are reduced.
The following is a detailed description.
An embodiment of the present application provides a field-of-view expansion method. Referring to fig. 1, the field-of-view expansion method in this embodiment includes, but is not limited to, steps S101 to S103.
Step S101, acquiring main field-of-view information of a target object, where the main field-of-view information includes at least one of a main field-of-view image and a main field-of-view direction.
Step S102, acquiring an expanded field-of-view range to be reached and a surrounding environment image of the position of the target object.
Step S103, obtaining an expanded field-of-view image with continuous content according to the main field-of-view information, the surrounding environment image, and the expanded field-of-view range, where the expanded field-of-view image includes the main field-of-view image and an auxiliary field-of-view image, and the display scale of the main field-of-view image differs from that of the expanded field-of-view image.
It should be noted that the field-of-view expansion method in the embodiments of the present invention can be applied to terminals such as mobile phones, AR glasses, VR glasses or helmets, and vehicle-mounted devices. When the terminal executes the method, the main field-of-view information of the target object is first acquired. The field of view is generally divided into a main field of view and a remaining field of view; the main field-of-view information includes at least one of a main field-of-view image or a main field-of-view direction and represents the center of the target object's field of view, where the resolution requirement is higher. The main field-of-view image is the image at the center of the target object's field of view, and the main field-of-view direction is the direction of that center, so the embodiments of the present invention can expand the field of view according to either one. The target object may be the user or the device on which the terminal is located: when the terminal is a mobile phone, AR glasses, or VR glasses, the target object is the user and the user's main field-of-view information is acquired; when the terminal is a vehicle-mounted device, such as an in-vehicle display, the target object is the automobile in which the display is installed and the automobile's main field-of-view information is acquired. The embodiments of the present invention take the user as the target object as an example. Next, the expanded field-of-view range to be reached and a surrounding environment image of the position of the target object are acquired; the surrounding environment image represents the user's surroundings, and it is understood that its field-of-view range is larger than the range the user can observe. Finally, a continuous expanded field-of-view image is obtained and displayed according to the main field-of-view information (that is, at least one of the main field-of-view image or the main field-of-view direction), the surrounding environment image, and the expanded field-of-view range. The expanded field-of-view image includes the main field-of-view image and an auxiliary field-of-view image, where the auxiliary field-of-view image is the auxiliary view determined from the surrounding environment image according to the main field-of-view image or the main field-of-view direction, and the display scale of the main field-of-view image differs from that of the expanded field-of-view image, so that the user can see an expanded field-of-view image with a wider view within the originally limited main field of view.
Referring to fig. 2, the embodiments of the present invention can effectively improve the viewing continuity of the expanded field of view: by detecting the position of the main field of view and expanding the field of view on the basis of the main field-of-view information and the surrounding environment image, a continuous expanded field-of-view image can be generated, so that the user can directly see more blind-zone content and the safety hazards caused by blind zones are reduced.
It may be understood that, in the embodiments of the present invention, the terminal may acquire the main field-of-view information of the target object and the surrounding environment image through built-in cameras, or through external devices: for example, the terminal may be connected to external cameras or sensors, which acquire the main field-of-view information or the surrounding environment image of the target object and send it to the terminal, and the terminal then processes it to obtain the expanded field-of-view image.
It can be understood that the terminal applying the field-of-view expansion method in the embodiments of the present invention can realize high-bandwidth, low-latency signal transmission through 5G, which improves the efficiency of data transmission and guarantees the efficiency of generating the expanded field-of-view image.
Specifically, by wearing augmented-reality glasses or using another display mode capable of replacing the current visual content, the user's modified environment is presented completely within the user's field of view and replaces the original visual content. For example, when AR glasses execute the field-of-view expansion method of the embodiments of the present invention, the user sees an expanded field of view: whereas the original natural field of view covers roughly 120 to 150 degrees, wide-angle content acquired in real time is reduced and displayed within the natural field of view, so the user can directly see scenes and objects within 180 degrees or more without turning the head or eyes. Because of the low latency of the device, any action of the user, such as a hand movement, is fed back in real time before the eyes; and owing to the strong adaptability of the human brain, after wearing the display-enhancement device for a short period (from several minutes to several days) the user can treat it as a replacement for natural vision and can accurately point at and pick up objects placed outside the natural field of view, thereby achieving the effect of enhanced perception. As another example, by wearing AR glasses the user can switch the converted field of view to a third-person viewing angle, under which the user can observe not only the content in front of the body but also the back of the own head (or a synthesized image of the whole body) and the scenes and objects behind the body. After adapting to this perception mode, the user can work in dangerous environments that require close attention to the surroundings, avoiding danger sources behind the body and avoiding contact with them while moving.
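As a concrete illustration of steps S101 to S103, the following Python sketch outlines one possible processing flow. It is a minimal sketch under stated assumptions: the helper functions capture_main_view, capture_surroundings, build_expanded_view, and display are hypothetical placeholders for whatever camera, fusion, and display components a real terminal would provide, and are not part of the patent.

```python
# Minimal sketch of the S101-S103 pipeline described above.
# All helper functions are hypothetical placeholders for the camera,
# fusion, and display components of a terminal (phone, AR glasses,
# vehicle-mounted device).

def expand_field_of_view(target, expanded_range_deg=150):
    # S101: acquire main field-of-view information (image and/or direction)
    main_image, main_direction = capture_main_view(target)

    # S102: acquire the expanded range to reach and a surrounding image
    surroundings = capture_surroundings(target)   # e.g. a panoramic frame

    # S103: fuse into a content-continuous expanded field-of-view image,
    # displayed at a different scale than the raw main view
    expanded = build_expanded_view(
        main_image=main_image,
        main_direction=main_direction,
        surroundings=surroundings,
        expanded_range_deg=expanded_range_deg,
    )
    display(expanded)
    return expanded
```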
Referring to fig. 3, in an embodiment the main field-of-view information includes the main field-of-view image, and step S103 may further include, but is not limited to, steps S201 and S202.
Step S201, determining an auxiliary field-of-view image from the surrounding environment image according to the main field-of-view image and the expanded field-of-view range, where the content of the auxiliary field-of-view image transitions continuously into the content of the main field-of-view image.
Step S202, performing image fusion on the auxiliary field-of-view image and the main field-of-view image to obtain an expanded field-of-view image with continuous content.
In this embodiment, the expanded field-of-view image with continuous content is obtained according to the main field-of-view image, the surrounding environment image, and the expanded field-of-view range. Specifically, the auxiliary field-of-view content and range to be displayed are determined from the surrounding environment image according to the main field-of-view image and the expanded field-of-view range, yielding an auxiliary field-of-view image whose content transitions continuously into that of the main field-of-view image; the auxiliary field-of-view image is then fused with the original main field-of-view image to obtain the expanded field-of-view image with continuous content.
Referring to fig. 4, it may be understood that the original main field-of-view image of the target object covers a limited range, which is why the field of view needs to be expanded, while the acquired surrounding environment image covers a larger range, so the auxiliary field-of-view image must be selected from it according to the main field-of-view image. In an embodiment, the auxiliary field-of-view image is determined from the surrounding environment image according to the main field-of-view image and the expanded field-of-view range. The surrounding environment image may contain the main field-of-view image, or the imagery around the viewing angle of the main field-of-view image, and the expanded field-of-view range indicates the expansion the target object needs to reach. For example, the surrounding environment image may cover 180 degrees or 360 degrees around the target object while the main field-of-view image covers the current 120 degrees of the target object; if the expanded field-of-view range is 150 degrees, the extra 30 degrees of viewing angle adjacent to the main field-of-view image are selected from the surrounding environment image as the auxiliary field-of-view image, which may be 15 degrees on each side, 30 degrees on the left, or 30 degrees on the right. The auxiliary field-of-view image transitions continuously into the main field-of-view image and is fused with it to obtain the expanded field-of-view image with continuous content.
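The angular selection described above can be pictured with a small sketch on an equirectangular 360-degree panorama; the 120-degree main view and 150-degree expanded range follow the example in the text, while the function name, the split parameter, and the equirectangular assumption are illustrative choices, not the patent's prescribed implementation.

```python
def select_auxiliary_view(panorama, main_center_deg, main_fov_deg=120,
                          expanded_fov_deg=150, split=(0.5, 0.5)):
    """Cut the extra angular margins around the main view out of an
    equirectangular panorama whose width covers 360 degrees.  `split`
    says how the additional angle is divided between the left and right
    sides, e.g. (0.5, 0.5) gives 15 degrees each for a 120 -> 150 expansion.
    For clarity this assumes the expanded range does not cross the
    0/360-degree seam of the panorama."""
    h, w = panorama.shape[:2]
    deg_to_px = w / 360.0

    extra = expanded_fov_deg - main_fov_deg            # e.g. 30 degrees
    left_extra, right_extra = extra * split[0], extra * split[1]

    col = lambda angle_deg: int(round(angle_deg * deg_to_px))
    main_left = main_center_deg - main_fov_deg / 2
    main_right = main_center_deg + main_fov_deg / 2

    # Auxiliary strips that continue the content at the main view's borders
    aux_left = panorama[:, col(main_left - left_extra):col(main_left)]
    aux_right = panorama[:, col(main_right):col(main_right + right_extra)]
    return aux_left, aux_right
```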
Referring to fig. 5, in an embodiment the main field-of-view information includes the main field-of-view direction, and step S103 may further include, but is not limited to, steps S301 and S302.
Step S301, determining an expanded field-of-view direction from the surrounding environment image according to the main field-of-view direction.
Step S302, obtaining an expanded field-of-view image with continuous content from the surrounding environment image according to the expanded field-of-view direction and the expanded field-of-view range, where the expanded field-of-view image includes the main field-of-view image and an auxiliary field-of-view image.
In this embodiment, the expanded field-of-view image with continuous content is obtained according to the main field-of-view direction, the surrounding environment image, and the expanded field-of-view range. Specifically, the field-of-view content and range to be displayed are determined from the surrounding environment image according to the main field-of-view direction and the expanded field-of-view range, which yields an expanded field-of-view image with continuous content. The expanded field-of-view image includes the main field-of-view image and the auxiliary field-of-view image: the main field-of-view image here is the image within the field-of-view range in the user's original main field-of-view direction, and the auxiliary field-of-view image is the auxiliary view selected from the surrounding environment image according to the main field-of-view direction, so the expanded field-of-view image determined from the surrounding environment image according to the main field-of-view direction contains both the original main field-of-view image and auxiliary content outside the original field-of-view range.
It may be understood that, in this embodiment, the direction in which the field of view is to be expanded is determined according to the target object, and an expanded field-of-view image with continuous content is then obtained from the surrounding environment image; the obtained expanded field-of-view image is larger than the main field-of-view image the target object could originally obtain, for example when the target object could originally only see the image within the current 120-degree field of view as the main field-of-view image.
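Assuming again an equirectangular panorama, the direction-driven variant of S301 and S302 can be sketched as follows; the linear mapping from a viewing direction in degrees to pixel columns is the standard one for equirectangular images, and the names are illustrative only.

```python
import numpy as np

def extract_expanded_view(panorama, main_direction_deg, expanded_fov_deg=150):
    """Return the slice of the panorama centred on the main viewing
    direction and spanning the expanded field-of-view range (S301-S302)."""
    h, w = panorama.shape[:2]
    deg_to_px = w / 360.0

    center_col = int(round((main_direction_deg % 360) * deg_to_px))
    half_width = int(round(expanded_fov_deg / 2 * deg_to_px))

    # np.take with mode="wrap" handles the 0/360-degree seam of the panorama
    cols = np.arange(center_col - half_width, center_col + half_width)
    return np.take(panorama, cols, axis=1, mode="wrap")
```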
Referring to fig. 6, in an embodiment the main field-of-view information includes the main field-of-view image, and step S202 may further include, but is not limited to, steps S401 and S402.
Step S401, fusing the auxiliary field-of-view image around the main field-of-view image to obtain a preprocessed image.
Step S402, compressing the preprocessed image so as to maintain the display scale of the central field of view of the preprocessed image and reduce the display scale of the peripheral field of view, obtaining the expanded field-of-view image with continuous, compressed content.
In the process of fusing the main field-of-view image and the auxiliary field-of-view image into the expanded field-of-view image, this embodiment first fuses the auxiliary field-of-view image around the main field-of-view image to obtain a preprocessed image. Because the content of the auxiliary field-of-view image transitions continuously into the content of the main field-of-view image, the auxiliary field-of-view image must be placed at the corresponding position around the main field-of-view image, which may be at least one of the left, right, upper, or lower side; for example, if the auxiliary field-of-view image continues the content on the left of the main field-of-view image, it is placed on the left side to obtain the preliminarily fused preprocessed image. The preprocessed image is then compressed so as to maintain the display scale of its central field of view and reduce the display scale of the peripheral field of view, obtaining the expanded field-of-view image with continuous, compressed content.
It should be noted that, in this embodiment, imagery outside the original field of view is brought into view at the cost of reduced angular resolution; that is, the auxiliary field-of-view image is fused with the original main field-of-view image, and when the size of the display area watched by the target object is unchanged, adding more image content to the fusion inevitably reduces the angular resolution of the image. For example, if the terminal is AR glasses whose display can originally show only a 120-degree viewing angle, fitting a 150-degree viewing angle into the same display requires compression. Angular resolution refers to the resolving power of an imaging system or one of its components; a uniformly shrunken image would additionally enlarge the visible range but would also reduce the resolution of the central visual perception area of the main field of view.
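The fusion and uniform-compression case of S401 and S402 can be sketched as below, assuming OpenCV-style resizing and same-height image strips; all names are illustrative and the non-uniform variant that preserves the centre is shown later.

```python
import numpy as np
import cv2

def fuse_and_compress(main_img, aux_left, aux_right, display_size):
    """S401/S402 sketch: place the auxiliary strips beside the main view,
    then uniformly shrink the wider result back to the display size.
    Assumes the strips share the main image's height."""
    # S401: horizontal concatenation keeps the content continuous at the seams
    preprocessed = np.hstack([aux_left, main_img, aux_right])

    # S402 (uniform version): squeeze the wider image into the display,
    # which lowers the angular resolution everywhere, including the centre
    disp_w, disp_h = display_size
    return cv2.resize(preprocessed, (disp_w, disp_h),
                      interpolation=cv2.INTER_AREA)
```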
Referring to fig. 7, in an embodiment the main field-of-view information includes the main field-of-view direction, and step S302 may further include, but is not limited to, steps S501 and S502.
Step S501, obtaining a preprocessed image from the surrounding environment image according to the expanded field-of-view direction and the expanded field-of-view range.
Step S502, compressing the preprocessed image so as to maintain the display scale of the central field of view of the preprocessed image and reduce the display scale of the peripheral field of view, obtaining the expanded field-of-view image with continuous, compressed content.
In the process of obtaining the expanded field-of-view image from the surrounding environment image according to the main field-of-view direction, this embodiment first obtains a preprocessed image from the surrounding environment image according to the expanded field-of-view direction and the expanded field-of-view range. Following the example above, if the field of view the target object could originally watch is only 120 degrees and it is now expanded, a preprocessed image with a 150-degree field-of-view range is obtained, and the preprocessed image is then compressed to the viewable display size to obtain the expanded field-of-view image.
It should be noted that, in this embodiment, imagery outside the original field of view is likewise brought into view at the cost of reduced angular resolution; that is, the expanded field-of-view image containing content outside the original field of view replaces the original main field-of-view image. When the size of the display area watched by the target object is unchanged, adding more content to the fusion inevitably reduces the angular resolution of the image: for example, if the terminal is AR glasses whose display can originally show only a 120-degree viewing angle, fitting a 150-degree viewing angle into it requires compression. Since a uniformly shrunken image would reduce the angular resolution of the central visual perception area of the main field of view, this embodiment further maintains the display scale of the central field of view of the preprocessed image while reducing the display scale of the peripheral field of view, so that more content is brought into the visible range, the compressed expanded field-of-view image remains continuous, and the user's viewing clarity within the central range of the field of view is not affected.
Referring to fig. 8, in an embodiment, step S402 and/or step S502 may further include, but are not limited to, steps S601 and S602.
Step S601, acquiring an adjustment function for adjusting the field-of-view compression.
Step S602, non-uniformly compressing the preprocessed image according to the adjustment function so as to maintain the display scale of the central field of view of the preprocessed image and reduce the display scale of the peripheral field of view, obtaining the expanded field-of-view image with continuous, compressed content.
In this embodiment, regardless of whether the expanded field-of-view image is obtained by fusing the main field-of-view image with the auxiliary field-of-view image or is obtained from the surrounding environment image according to the main field-of-view direction, the preprocessed image must be compressed to obtain the final expanded field-of-view image. Specifically, the preprocessed image is processed through an adjustment function: the adjustment function for adjusting the field-of-view compression is first acquired, and the preprocessed image is then compressed non-uniformly according to the adjustment function so as to maintain the display scale of the central field of view and reduce the display scale of the peripheral field of view, obtaining the expanded field-of-view image with continuous, compressed content.
It will be appreciated that although a uniformly shrunken image additionally enlarges the visible range, it reduces the angular resolution of the central visual perception area of the main field of view. This embodiment therefore improves the compression scheme by compressing the angular resolution of the image in a non-uniform manner: the central field-of-view proportion is maintained while more content is included in the visible range. The preprocessed image is processed by the adjustment function so that the display scale gradually decreases towards both sides, which keeps the display scale of the central field of view and reduces that of the peripheral field of view, so the user's viewing clarity within the central range of the field of view is guaranteed without harming the viewing experience.
Referring to fig. 9 and fig. 10, taking the target object being a user as an example, lines A and E delimit the user's auxiliary field of view, lines B and D delimit the user's main field of view, and line C is the center line of the user's field of view. Under conventional uniform compression, as shown in fig. 9, the auxiliary field of view AE is compressed to A'E' and the main field of view BD is compressed to B'D', and the angle subtended by B'D' after compression equals ∠α. After the field-of-view compression is adjusted, the display scale of the central field of view is maintained and that of the peripheral field of view is reduced by non-uniform compression, so that, as shown in fig. 10, the angle subtended by B'D' is ∠β, which is larger than ∠α in fig. 9. Because the display scale of the central field of view is maintained, ∠β is larger than ∠α, and the user's viewing clarity within the central range of the field of view is guaranteed without harming the viewing experience.
Referring to fig. 11, in an embodiment, the adjustment function includes a first piecewise function for maintaining the display scale of the image and a second piecewise function for reducing the display scale of the image, and step S602 may further include, but is not limited to, steps S701 to S705.
Step S701, acquiring a first display range of the central field of view.
Step S702, acquiring a second display range of the peripheral field of view.
Step S703, processing the preprocessed image within the first display range according to the first piecewise function, so as to maintain the display scale of the central field of view of the preprocessed image.
Step S704, non-uniformly compressing the preprocessed image within the second display range according to the second piecewise function, so as to reduce the display scale of the peripheral field of view of the preprocessed image.
Step S705, determining the processed preprocessed image as the expanded field-of-view image.
Further, the adjustment function in this embodiment is divided into two parts: a first piecewise function used to maintain the display scale of the image and a second piecewise function used to reduce it. When processing the preprocessed image with the adjustment function, a first display range of the central field of view and a second display range of the peripheral field of view are acquired first. The central field of view is the part that must keep a certain viewing clarity, so its display scale must be preserved, whereas the user's resolution requirement for the peripheral field of view is generally lower, so its display scale is reduced to ensure the fused image finally fits the required size. Accordingly, the preprocessed image within the first display range is processed according to the first piecewise function to maintain the display scale of the central field of view, the preprocessed image within the second display range is non-uniformly compressed according to the second piecewise function to reduce the display scale of the peripheral field of view, and the processed preprocessed image is determined as the expanded field-of-view image.
It may be understood that the first display range and the second display range may be set by the user, and the peripheral field of view may be the left and right portions of the field of view or the upper and lower portions of the field of view.
It should be noted that, referring to fig. 12, the first piecewise function mentioned in this embodiment is F = 1.0, so that the image is displayed at a scale of 1.0 within a given distance from the center; after the second piecewise function processes the preprocessed image, the display scale becomes smaller the farther the display position is from the center of the image. With F_nonlinear denoting the second piecewise function, the display scale is given by:

S_display = D_distance × F_nonlinear  (1)

where S_display is the display scale of the image and D_distance is the distance of the display position from the center of the field of view of the preprocessed image. It will be appreciated that this embodiment uses a nonlinear function as the second piecewise function, but the second piecewise function may also be a linearly decreasing function, which is not specifically limited here.
In one embodiment, the second piecewise function may be:

F_nonlinear(x) = cos(x²)  (2)

where x is the distance of the display position from the center of the field of view. Within a certain range, formula (2) decreases, and it decreases faster as the distance increases, so the display scale of the peripheral field of view can be adjusted at the corresponding position. The second piecewise function may also be another formula, which is not specifically limited here.
It should be noted that, in this embodiment, the second piecewise function may be set according to actual needs so as to realize different controls of the display scale.
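To make the piecewise adjustment concrete, the sketch below remaps the pixel columns of the preprocessed image with a relative scale of 1.0 inside a central range and a decaying scale outside it, in the spirit of formulas (1) and (2). The particular decay function, the normalization, the 0.4 central fraction, and the use of OpenCV remapping are illustrative assumptions, not the implementation mandated by the patent.

```python
import numpy as np
import cv2

def adjustment_scale(d_norm, central=0.4):
    """Adjustment function sketch: the first piece keeps a relative scale of
    1.0 inside the central range, the second piece decays with distance (a
    cos(x^2)-style choice in the spirit of formula (2)).  d_norm is the
    normalized distance from the field-of-view center, in [-1, 1]."""
    d = np.abs(d_norm)
    scale = np.ones_like(d)
    outer = d > central
    x = (d[outer] - central) / (1.0 - central)        # 0..1 in the periphery
    scale[outer] = np.clip(np.cos(x ** 2 * np.pi / 2), 0.05, 1.0)
    return scale

def nonuniform_compress(preprocessed, display_width, central=0.4):
    """Non-uniform horizontal compression (S602): central columns keep a
    relative scale of 1.0 while peripheral columns are squeezed harder, so
    the whole preprocessed image fits into display_width."""
    h, w = preprocessed.shape[:2]
    src_cols = np.arange(w, dtype=np.float32)
    d_norm = (src_cols - w / 2) / (w / 2)

    # Cumulative "display length" of each source column, renormalized so the
    # total exactly fills the display width.
    weights = adjustment_scale(d_norm, central)
    dst_of_src = np.cumsum(weights)
    dst_of_src *= (display_width - 1) / dst_of_src[-1]

    # cv2.remap wants, for every destination pixel, its source coordinate.
    dst_cols = np.arange(display_width, dtype=np.float32)
    src_of_dst = np.interp(dst_cols, dst_of_src, src_cols).astype(np.float32)
    map_x = np.tile(src_of_dst, (h, 1))
    map_y = np.repeat(np.arange(h, dtype=np.float32)[:, None],
                      display_width, axis=1)
    return cv2.remap(preprocessed, map_x, map_y,
                     interpolation=cv2.INTER_LINEAR)
```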
Referring to fig. 13, in an embodiment, step S201 may further include, but is not limited to, steps S801 to S804.
Step S801, acquiring the main field-of-view center of the target object.
Step S802, determining the main field-of-view range of the target object according to the main field-of-view image.
Step S803, determining an expanded field-of-view center from the surrounding environment image according to the main field-of-view center.
Step S804, taking the expanded field-of-view center as the center position in the surrounding environment image, and determining the auxiliary field-of-view image from the surrounding environment image according to the main field-of-view range and the expanded field-of-view range.
In this embodiment, when determining the auxiliary field-of-view image from the surrounding environment image according to the main field-of-view image and the expanded field-of-view range, the auxiliary field-of-view image is determined according to the main field-of-view center and the main field-of-view range. Specifically, the main field-of-view center of the target object is acquired first and the main field-of-view range of the target object is determined according to the main field-of-view image; the main field-of-view center is the center of the target object's field of view, and the main field-of-view range indicates the largest field-of-view range obtainable within the main field of view of the target object. An expanded field-of-view center is then determined from the surrounding environment image according to the main field-of-view center, the expanded field-of-view center is taken as the center position in the surrounding environment image, and the image lying between the expanded field-of-view range and the main field-of-view range in the surrounding environment image is determined as the auxiliary field-of-view image according to the main field-of-view range and the expanded field-of-view range.
It will be appreciated that the main field-of-view center may be derived from the user's gaze, or obtained from the acquired main field-of-view image. In an embodiment, the acquired main field-of-view information changes as the target object moves: for example, when the target object is a user and the terminal is an AR device, the observed main field-of-view image rotates with the rotation of the user's head; when the target object is an automobile and the terminal is a vehicle-mounted device, the main field-of-view image is the image corresponding to the front of the automobile. The main field-of-view center may also be obtained by analyzing characteristic information of the target object: for example, when the target object is a user, the position on the screen at which the user's pupils gaze is determined from the acquired eye characteristic information and taken as the main field-of-view center, which can also effectively improve the efficiency of gaze communication in remote video-communication scenarios.
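One way to picture the gaze-based variant is the toy mapping below from a normalized pupil position to a main-view direction on the panorama; the linear eye-in-head model and all names are assumptions for illustration, not the patent's method.

```python
def main_view_center_from_gaze(pupil_xy, head_yaw_deg, eye_fov_deg=120):
    """Estimate the main field-of-view center direction (degrees) from a
    normalized pupil position (x, y in [-1, 1]) and the current head yaw.
    A simple linear eye-in-head model is assumed for illustration."""
    eye_offset_deg = pupil_xy[0] * (eye_fov_deg / 2)
    return (head_yaw_deg + eye_offset_deg) % 360

# Example: head facing 30 degrees, pupil looking slightly left of center
center_deg = main_view_center_from_gaze(pupil_xy=(-0.2, 0.0), head_yaw_deg=30)
```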
Referring to fig. 14, in an embodiment, step S103 may further include, but is not limited to, steps S901 to S903.
Step S901, acquiring updated main field-of-view information.
Step S902, acquiring in real time an updated surrounding environment image of the position of the target object.
Step S903, updating the expanded field-of-view image according to the updated main field-of-view information, the updated surrounding environment image, and the expanded field-of-view range.
It should be noted that the field-of-view expansion method in this embodiment can update the expanded field-of-view image in real time. Specifically, updated main field-of-view information is acquired in real time, including at least one of an updated main field-of-view image and an updated main field-of-view direction, because the position of the target object may change at any time and the main field-of-view information changes accordingly. On this basis, an updated surrounding environment image of the position of the target object is acquired in real time, and the expanded field-of-view image is updated according to the updated main field-of-view information, the updated surrounding environment image, and the expanded field-of-view range.
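A real-time refresh of S901 to S903 can be sketched as a simple loop reusing the earlier hypothetical helpers; frame pacing, threading, and error handling are omitted, and all helper names remain illustrative placeholders.

```python
import time

def run_realtime_expansion(target, expanded_range_deg=150, fps=30):
    """S901-S903 sketch: refresh the expanded field-of-view image as the
    target object (and therefore its main view and surroundings) moves."""
    period = 1.0 / fps
    while True:
        main_image, main_direction = capture_main_view(target)    # S901
        surroundings = capture_surroundings(target)               # S902
        expanded = build_expanded_view(                            # S903
            main_image=main_image,
            main_direction=main_direction,
            surroundings=surroundings,
            expanded_range_deg=expanded_range_deg,
        )
        display(expanded)
        time.sleep(period)
```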
In an embodiment, the surrounding environment image is a panoramic field-of-view image centered on the target object, and step S103 may further include, but is not limited to, the following step:
obtaining an expanded field-of-view image with continuous content according to the main field-of-view information, the panoramic field-of-view image, and the expanded field-of-view range.
It should be noted that, in this embodiment, the acquired surrounding environment image is a panoramic field-of-view image centered on the target object. Acquiring a panoramic image allows the field of view to be adjusted to a greater extent, makes it convenient to choose the size of the field-of-view range of the expanded field-of-view image as needed, and, since the panoramic image also contains the main field-of-view image, facilitates the subsequent fusion and selection. A panoramic camera may be provided on the terminal to capture the panoramic image in real time: for example, when the target object is a user and the terminal is AR glasses, a panoramic camera on the AR glasses can capture a 360-degree panoramic image around the user in real time; when the target object is an automobile, the vehicle-mounted device can capture a 360-degree panoramic image around the automobile in real time through panoramic cameras connected to the automobile.
Referring to fig. 15, in an embodiment, step S103 may further include, but is not limited to, steps S1001 to S1003.
Step S1001, acquiring viewing-angle selection information of the target object.
Step S1002, determining, according to the viewing-angle selection information, a first-person viewing-angle mode or a third-person viewing-angle mode for viewing during the field-of-view expansion of the target object.
Step S1003, obtaining an expanded field-of-view image in the first-person viewing-angle mode or the third-person viewing-angle mode according to the main field-of-view information, the surrounding environment image, and the expanded field-of-view range.
It should be noted that this embodiment supports switching of the viewing angle. By acquiring the viewing-angle selection information of the target object, the first-person or third-person viewing angle to switch to can be determined, and the first-person viewing-angle mode or the third-person viewing-angle mode used during the field-of-view expansion of the target object is then determined according to that information. Taking the target object being a user and the terminal being AR glasses as an example, the first-person viewing angle is the angle from which the user would normally observe with their own eyes; it is the user's own perspective and provides a stronger sense of immersion. As shown in fig. 16, in the first-person viewing-angle mode the expanded field-of-view image can replace natural vision, and the user can accurately point at and pick up objects placed outside the natural field of view according to the enhanced vision, achieving the effect of enhanced perception. The third-person viewing angle is the perspective of a third party: the user observes the scene from outside the body, which brings more information. As shown in fig. 17, in the third-person viewing-angle mode the field of view watched by the user is a converted field of view; this mode can be used for work in dangerous environments that require close attention to the surroundings, helping the user avoid danger sources behind the body and avoid contact with them while moving.
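A trivial sketch of the mode selection in S1001 to S1003 follows; the enum values, the main_info dictionary, and the render helpers are hypothetical stand-ins, not names defined by the patent.

```python
from enum import Enum

class ViewMode(Enum):
    FIRST_PERSON = "first_person"
    THIRD_PERSON = "third_person"

def build_view_for_mode(mode, main_info, surroundings, expanded_range_deg=150):
    """S1002/S1003 sketch: render the expanded view either from the user's
    own perspective or from an external (third-person) perspective."""
    if mode is ViewMode.FIRST_PERSON:
        return build_expanded_view(          # earlier hypothetical helper
            main_image=main_info.get("image"),
            main_direction=main_info.get("direction"),
            surroundings=surroundings,
            expanded_range_deg=expanded_range_deg,
        )
    # Third person: place a synthesized avatar of the target into a wider
    # crop of the surroundings so content behind the body becomes visible.
    return render_third_person(surroundings, main_info)   # hypothetical helper
```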
An embodiment of the present invention further provides a field-of-view expansion system. As shown in fig. 18, the field-of-view expansion system includes a full-field-of-view acquisition subsystem, a display subsystem, an input subsystem, and a control subsystem, which work together to complete the whole process of acquiring, fusing, and displaying the expanded field of view. The input subsystem is responsible for detecting the usual operating actions of the target object, such as eyeball rotation and head rotation when the target object is a user. The full-field-of-view acquisition subsystem is responsible for acquiring image information of the surrounding environment of interest to the target object, for example by capturing a 360-degree panoramic image in real time with a panoramic camera. The display subsystem is responsible for displaying the fused enhanced field-of-view image at the position required by the target object according to the instructions of the control subsystem. The control subsystem takes the information detected by the input subsystem and the panoramic image acquired by the full-field-of-view acquisition subsystem as input and, according to the field-of-view fusion method described above, controls the display subsystem to display an appropriate fused image at the appropriate position.
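The cooperation of the four subsystems can be pictured as the following composition; the class and method names are illustrative stand-ins for whatever hardware and software modules an actual system would use, and build_expanded_view is the earlier hypothetical helper.

```python
class FieldOfViewExpansionSystem:
    """Illustrative wiring of the four subsystems shown in fig. 18."""

    def __init__(self, input_sub, acquisition_sub, display_sub):
        self.input_sub = input_sub              # detects head/eye movement
        self.acquisition_sub = acquisition_sub  # panoramic capture
        self.display_sub = display_sub          # near-eye or in-vehicle display

    def step(self, expanded_range_deg=150):
        # Control-subsystem logic: combine inputs, fuse, and drive the display
        motion = self.input_sub.read_motion()
        panorama = self.acquisition_sub.capture_panorama()
        fused = build_expanded_view(            # earlier hypothetical helper
            main_image=None,
            main_direction=motion.get("yaw_deg"),
            surroundings=panorama,
            expanded_range_deg=expanded_range_deg,
        )
        self.display_sub.show(fused)
```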
Fig. 19 shows an electronic device 100 provided by an embodiment of the present invention. The electronic device 100 includes a processor 101, a memory 102, and a computer program stored on the memory 102 and executable on the processor 101, the computer program, when executed, performing the field-of-view expansion method described above.
The processor 101 and the memory 102 may be connected by a bus or other means.
The memory 102, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer-executable programs, such as the field-of-view expansion method described in the embodiments of the present invention. The processor 101 implements the field-of-view expansion method described above by running the non-transitory software programs and instructions stored in the memory 102.
The memory 102 may include a program storage area, which may store an operating system and at least one application program required for functions, and a data storage area, which may store data for performing the field-of-view expansion method described above. Further, the memory 102 may include a high-speed random access memory and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some implementations, the memory 102 optionally includes memory remotely located relative to the processor 101, and the remote memory can be connected to the electronic device 100 through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The non-transitory software programs and instructions required to implement the above field-of-view expansion method are stored in the memory 102, and when executed by the one or more processors 101, perform the above field-of-view expansion method, for example method steps S101 to S103 in fig. 1, method steps S201 to S202 in fig. 3, method steps S301 to S302 in fig. 5, method steps S401 to S402 in fig. 6, method steps S501 to S502 in fig. 7, method steps S601 to S602 in fig. 8, method steps S701 to S705 in fig. 11, method steps S801 to S804 in fig. 13, method steps S901 to S903 in fig. 14, and method steps S1001 to S1003 in fig. 15.
An embodiment of the present invention further provides a computer-readable storage medium that stores computer-executable instructions for executing the field-of-view expansion method described above.
In an embodiment, the computer-readable storage medium stores computer-executable instructions that are executed by one or more control processors, for example, to perform method steps S101 through S103 in fig. 1, method steps S201 through S202 in fig. 3, method steps S301 through S302 in fig. 5, method steps S401 through S402 in fig. 6, method steps S501 through S502 in fig. 7, method steps S601 through S602 in fig. 8, method steps S701 through S705 in fig. 11, method steps S801 through S804 in fig. 13, method steps S901 through S903 in fig. 14, and method steps S1001 through S1003 in fig. 15.
The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps and systems of the methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor, or a microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media.
It should also be appreciated that the various embodiments provided by the embodiments of the present invention may be arbitrarily combined to achieve different technical effects.
While the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments; those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit and scope of the present invention, and these equivalent modifications or substitutions are included within the scope of the present invention as defined by the appended claims.

Claims (13)

1. A visual field amplification method, comprising:
acquiring main visual field information of a target object, wherein the main visual field information comprises at least one of a main visual field image or a main visual field direction;
acquiring an amplified visual field range and a surrounding environment image of a position where the target object is located;
and obtaining an amplified visual field image with continuous content according to the main visual field information, the surrounding environment image, and the amplified visual field range, wherein the amplified visual field image comprises the main visual field image and an auxiliary visual field image, and a display proportion of the main visual field image is different from a display proportion of the amplified visual field image.
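By way of illustration only, the three steps above can be sketched in Python for a cylindrical 360-degree panorama stored as an H x W x 3 array; the function name obtain_amplified_view and its parameters are assumptions of this sketch, not part of the claimed method.

import numpy as np

def obtain_amplified_view(panorama: np.ndarray, center_col: int,
                          amplified_width: int) -> np.ndarray:
    # Cut a content-continuous amplified view out of a 360-degree panorama.
    # center_col marks the main visual field direction of the target object and
    # amplified_width the amplified visual field range, both in pixel columns.
    w = panorama.shape[1]
    # Columns of the amplified range, wrapping around the panorama seam so the
    # content stays continuous on both sides of the main visual field.
    cols = (np.arange(center_col - amplified_width // 2,
                      center_col + amplified_width // 2) % w)
    return panorama[:, cols]  # main view plus auxiliary views on either side

The adjustment of the display proportions referred to in the last step is detailed in claims 4 to 7 and sketched after claim 7.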
2. The visual field amplification method according to claim 1, wherein the main visual field information comprises the main visual field image, and the obtaining an amplified visual field image with continuous content according to the main visual field information, the surrounding environment image, and the amplified visual field range comprises:
determining an auxiliary visual field image from the surrounding environment image according to the main visual field image and the amplified visual field range, wherein the content of the auxiliary visual field image transitions continuously from the content of the main visual field image;
and carrying out image fusion on the auxiliary visual field image and the main visual field image to obtain the amplified visual field image with continuous content.
3. The visual field amplification method according to claim 1, wherein the main visual field information comprises the main visual field direction, and the obtaining an amplified visual field image with continuous content according to the main visual field information, the surrounding environment image, and the amplified visual field range comprises:
determining an amplified visual field direction from the surrounding environment image according to the main visual field direction;
and obtaining the amplified visual field image with continuous content from the surrounding environment image according to the amplified visual field direction and the amplified visual field range, wherein the amplified visual field image comprises the main visual field image and the auxiliary visual field image.
4. The visual field amplification method according to claim 2, wherein the carrying out image fusion on the auxiliary visual field image and the main visual field image to obtain the amplified visual field image with continuous content comprises:
fusing the auxiliary visual field image around the main visual field image to obtain a preprocessed image;
and compressing the preprocessed image to maintain the display proportion of the central visual field of the preprocessed image and reduce the display proportion of the peripheral visual field, so as to obtain the compressed amplified visual field image with continuous content.
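By way of illustration only, the fusion step of claim 4 can be sketched as a horizontal stitch of the auxiliary views around the main view; the function name fuse_around and the assumption that all images share the same height are choices of this sketch.

import numpy as np

def fuse_around(main: np.ndarray, left_aux: np.ndarray, right_aux: np.ndarray) -> np.ndarray:
    # Stitch the auxiliary visual field images around the main visual field image
    # along the horizontal axis so that the content transitions continuously.
    # All three images are assumed to be H x Wi x 3 arrays with the same height H.
    return np.concatenate([left_aux, main, right_aux], axis=1)  # preprocessed image

The subsequent compression step is sketched after claim 7.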
5. The visual field amplification method according to claim 3, wherein the obtaining the amplified visual field image with continuous content from the surrounding environment image according to the amplified visual field direction and the amplified visual field range comprises:
obtaining a preprocessed image from the surrounding environment image according to the amplified visual field direction and the amplified visual field range;
and compressing the preprocessed image to maintain the display proportion of the central visual field of the preprocessed image and reduce the display proportion of the peripheral visual field, so as to obtain the compressed amplified visual field image with continuous content.
6. The visual field amplification method according to claim 4 or 5, wherein the compressing the preprocessed image to maintain the display proportion of the central visual field of the preprocessed image and reduce the display proportion of the peripheral visual field, so as to obtain the compressed amplified visual field image with continuous content, comprises:
acquiring an adjustment function for performing visual field compression adjustment;
and carrying out non-uniform compression on the preprocessed image according to the adjustment function, so as to maintain the display proportion of the central visual field of the preprocessed image and reduce the display proportion of the peripheral visual field, and obtain the compressed amplified visual field image with continuous content.
7. The visual field amplification method according to claim 6, wherein the adjustment function comprises a first piecewise function for maintaining the display proportion of an image and a second piecewise function for reducing the display proportion of an image, and the carrying out non-uniform compression on the preprocessed image according to the adjustment function, so as to maintain the display proportion of the central visual field of the preprocessed image and reduce the display proportion of the peripheral visual field, and obtain the compressed amplified visual field image with continuous content, comprises:
acquiring a first display range of the central visual field;
acquiring a second display range of the peripheral visual field;
processing the preprocessed image in the first display range according to the first piecewise function, so as to maintain the display proportion of the central visual field of the preprocessed image;
carrying out non-uniform compression on the preprocessed image in the second display range according to the second piecewise function, so as to reduce the display proportion of the peripheral visual field of the preprocessed image;
and determining the processed preprocessed image as the amplified visual field image.
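By way of illustration only, one possible adjustment function and the corresponding column-wise warp are sketched below; the concrete piecewise segments (identity inside the first display range, linear squeezing by a factor compress outside it) and all parameter names are assumptions of this sketch, not the only form the claims permit.

import numpy as np

def adjustment_function(x_out: np.ndarray, r1: float, compress: float) -> np.ndarray:
    # Maps each signed output column x_out (measured from the centre of the
    # amplified view) back to a source column of the preprocessed image.
    # First piecewise segment: identity inside the first display range r1,
    # so the central visual field keeps its display proportion.
    # Second piecewise segment: linear squeeze by compress (> 1) outside r1,
    # so the display proportion of the peripheral visual field is reduced.
    ax = np.abs(x_out)
    src = np.where(ax <= r1, ax, r1 + (ax - r1) * compress)
    return np.sign(x_out) * src

def compress_preprocessed(pre: np.ndarray, r1: int, compress: float) -> np.ndarray:
    # Non-uniform, column-wise warp of the preprocessed image (H x W x 3).
    w_in = pre.shape[1]
    c_in = w_in // 2
    half_out = r1 + int(np.ceil((c_in - r1) / compress))  # output half-width
    x_out = np.arange(-half_out, half_out)
    src_cols = np.round(adjustment_function(x_out, r1, compress) + c_in).astype(int)
    src_cols = np.clip(src_cols, 0, w_in - 1)
    return pre[:, src_cols]  # central field unchanged, peripheral field compressed

For example, with r1 = 200 and compress = 3.0, approximately the central 400 columns of the preprocessed image are displayed at a 1:1 proportion, while every peripheral output column gathers three source columns, shrinking the peripheral visual field to about one third of its original display width.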
8. The visual field amplification method according to any one of claims 2 to 5, wherein the determining an auxiliary visual field image from the surrounding environment image according to the main visual field image and the amplified visual field range comprises:
acquiring a main visual field center of the target object;
determining a main visual field range of the target object according to the main visual field image;
determining an amplified visual field center from the surrounding environment image according to the main visual field center;
and taking the amplified visual field center as the center position in the surrounding environment image, and determining the auxiliary visual field image from the surrounding environment image according to the main visual field range and the amplified visual field range.
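By way of illustration only, the determination of the auxiliary visual field image in claim 8 can be sketched on a cylindrical panorama; the column-based geometry and all parameter names are assumptions of this sketch.

import numpy as np

def determine_auxiliary_view(panorama: np.ndarray, amplified_center_col: int,
                             main_half_width: int, amplified_half_width: int):
    # Cut the auxiliary visual field out of the surrounding-environment panorama,
    # centred on the amplified visual field centre that corresponds to the
    # target object's main visual field centre.
    w = panorama.shape[1]
    cols = (np.arange(amplified_center_col - amplified_half_width,
                      amplified_center_col + amplified_half_width) % w)
    amplified = panorama[:, cols]
    # The auxiliary view is the amplified range minus the main visual field range.
    left_aux = amplified[:, :amplified_half_width - main_half_width]
    right_aux = amplified[:, amplified_half_width + main_half_width:]
    return left_aux, right_aux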
9. The visual field amplification method according to any one of claims 1 to 5, wherein the obtaining an amplified visual field image with continuous content according to the main visual field information, the surrounding environment image, and the amplified visual field range comprises:
acquiring updated main visual field information;
acquiring, in real time, an updated surrounding environment image of the position where the target object is located;
and updating the amplified visual field image according to the updated main visual field information, the updated surrounding environment image, and the amplified visual field range.
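By way of illustration only, the real-time update of claim 9 amounts to a refresh loop; every callable passed in below is a hypothetical placeholder for the corresponding step of the method.

def update_loop(get_main_view_info, capture_surroundings, compute_amplified_view,
                amplified_range, render):
    # Refresh the amplified visual field image whenever the main visual field
    # information or the surrounding environment image changes.
    while True:
        main_info = get_main_view_info()       # updated main visual field information
        surroundings = capture_surroundings()  # updated surrounding environment image, in real time
        amplified = compute_amplified_view(main_info, surroundings, amplified_range)
        render(amplified)                      # display the updated amplified visual field image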
10. The visual field amplification method according to any one of claims 1 to 5, wherein the surrounding environment image is a panoramic visual field image centered on the target object, and the obtaining an amplified visual field image with continuous content according to the main visual field information, the surrounding environment image, and the amplified visual field range comprises:
obtaining the amplified visual field image with continuous content according to the main visual field information, the panoramic visual field image, and the amplified visual field range.
11. The visual field amplification method according to any one of claims 1 to 5, wherein the obtaining an amplified visual field image with continuous content according to the main visual field information, the surrounding environment image, and the amplified visual field range comprises:
acquiring viewing angle selection information of the target object;
determining, according to the viewing angle selection information, a first-person viewing angle mode or a third-person viewing angle mode for viewing the amplified visual field of the target object;
and obtaining the amplified visual field image in the first-person viewing angle mode or the third-person viewing angle mode according to the main visual field information, the surrounding environment image, and the amplified visual field range.
12. An electronic device, comprising: a memory storing a computer program, and a processor, wherein the processor, when executing the computer program, implements the visual field amplification method according to any one of claims 1 to 11.
13. A computer-readable storage medium, wherein the storage medium stores a program that is executed by a processor to implement the visual field amplification method according to any one of claims 1 to 11.

