CN114518856A - Image display system and image display method for preventing motion sickness - Google Patents


Info

Publication number
CN114518856A
CN114518856A (application CN202111114413.9A)
Authority
CN
China
Prior art keywords
image
display
change rate
user
area
Prior art date
Legal status
Pending
Application number
CN202111114413.9A
Other languages
Chinese (zh)
Inventor
李健儒
陈冠廷
蔡宇翔
余姿仪
戴宏明
Current Assignee
Industrial Technology Research Institute ITRI
Original Assignee
Industrial Technology Research Institute ITRI
Priority date
Filing date
Publication date
Priority claimed from TW110133053A (TWI790738B)
Application filed by Industrial Technology Research Institute ITRI
Publication of CN114518856A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14: Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 3/1407: General aspects irrespective of display type, e.g. determination of decimal point position, display with fixed or driving decimal point, suppression of non-significant zeros
    • G06F 3/147: Digital output to display device using display panels

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention discloses an image display system and an image display method for preventing motion sickness. The image display method includes the following steps. A field of view of a user on a display is acquired. An external image on the other side of the display relative to the user is extracted, the external image having a first region and a second region. An image change rate of the first region and an image change rate of the second region are calculated. An attitude of a vehicle is sensed. At least one image is then displayed on the display according to the field of view, the image change rate of the first region, the image change rate of the second region, and the attitude of the vehicle.

Description

Image display system and image display method for preventing motion sickness
Technical Field
The present invention relates to an image display system and an image display method, and more particularly, to an image display system and an image display method for preventing motion sickness in a vehicle.
Background
Vehicle shaking that occurs while a vehicle is traveling can cause its users (the driver or passengers) to experience dizziness and discomfort, known as motion sickness. More specifically, when a user rides in a vehicle and views the external scenery, the scenery moves and changes rapidly relative to the vehicle, so that the motion state perceived by the user's vestibular system conflicts with the visually perceived spatial position, causing motion sickness.
To reduce motion sickness, the user can try to fixate on a constant scene, change sitting posture, improve the air quality in the vehicle, or take anti-motion-sickness medication. However, these approaches may not effectively suppress motion sickness when the vehicle travels long distances through different environments or road conditions. Accordingly, the related industries are working on solutions that more effectively relieve users' motion sickness symptoms.
Disclosure of Invention
One technical aspect of the present disclosure provides an image display system, which includes a display, a first information extraction device, a second information extraction device, a sensor, and a processing device. The first information extraction device is used for acquiring a field of view of a user on the display. The second information extraction device is used for extracting an external image on the other side of the display relative to the user, the external image having a first region and a second region. The sensor is used for sensing an attitude of a vehicle. The processing device is used for generating at least one image and displaying the at least one image on the display according to the field of view, the image change rate of the first region, the image change rate of the second region, and the attitude of the vehicle.
Another technical aspect of the present disclosure provides an image display method including the following steps. A field of view of a user on a display is acquired. An external image on the other side of the display relative to the user is extracted, the external image having a first region and a second region. An image change rate of the first region and an image change rate of the second region are calculated. An attitude of a vehicle is sensed. At least one image is displayed on the display according to the field of view, the image change rate of the first region, the image change rate of the second region, and the attitude of the vehicle.
Other aspects and advantages of the invention will become apparent upon review of the following drawings, detailed description and claims.
Drawings
Fig. 1 is a block diagram of an image display system according to a first embodiment of the present disclosure.
Fig. 2 and 3 are schematic views of display positions of images according to the present disclosure.
Fig. 4 is a schematic diagram of a determination method of an image change rate according to a first embodiment of the disclosure.
Fig. 5 is a flowchart of a method for determining an image change rate according to a first embodiment of the disclosure.
Fig. 6A to 6D are schematic diagrams of a display position and/or display angle adjustment method of an image according to the present disclosure.
Fig. 7A to 7B are schematic diagrams illustrating types and/or color selection modes of images according to the present disclosure.
Fig. 8 is a flowchart illustrating a method for selecting image types and/or colors according to the present disclosure.
Fig. 9 is a schematic diagram illustrating a determination method of an external image reference object according to the present disclosure.
Fig. 10 is a flowchart of a method for determining a reference target of an external image according to the present disclosure.
Fig. 11 is a flowchart of an image display method according to a first embodiment of the disclosure.
Fig. 12A and 12B are flowcharts illustrating an image display method according to a second embodiment of the disclosure.
Fig. 13A and 13B are flowcharts illustrating an image display method according to a third embodiment of the disclosure.
Fig. 14A is a flowchart of an image display method according to a fourth embodiment of the disclosure.
Fig. 14B and 14C are schematic diagrams illustrating a single window method or a ring cabin method for displaying images in the image display method according to the fourth embodiment.
Fig. 14D is a schematic diagram of selectively performing a masking process on an external image in an image display method according to a fourth embodiment of the disclosure.
Fig. 15A and 15B are flowcharts illustrating an image display method according to a fifth embodiment of the disclosure.
Fig. 16A is a flowchart of an image display method according to a sixth embodiment of the disclosure.
Fig. 16B is a block diagram of an image display system incorporating the image display method of the sixth embodiment.
Fig. 16C is a schematic diagram illustrating a display of prompt information in the image display method according to the sixth embodiment.
[Description of reference numerals]
100: image display system
102: display
104: first information extraction device
106: second information extraction device
108: sensor
110: processing device
202: user
302: head of user
304: eyes of user
C1, C2: image change rates
P: attitude
P_f: attitude at a subsequent time
I1: first region
I2: second region
V: field of view
V1: main field-of-view region
V2: sub field-of-view region
G: image
G_L: left endpoint
G_R: right endpoint
Ht: prompt information
IMG: external image
402, 404, 410, 412: blocks
406, 408, 414, 416: objects
418: display information
602: vehicle
902: object
1402: vehicle
1404: vehicle window
1408: front windshield
1410: rear windshield
502-520: steps
802-814: steps
1100, 1200, 1300, 1400, 1500, 1600: image display methods
1002-1014, 1102-1116, 1202-1216, 1302-1316, 1402-1420, 1502-1516, 1602-1616: steps
Detailed Description
The technical terms in this specification have their ordinary meanings in the technical field; where a term is described or defined in this specification, that term is interpreted according to the description or definition herein. The embodiments of the present disclosure each have one or more technical features. A person skilled in the art may selectively implement some or all of the features of any embodiment, or selectively combine some or all of the features of the embodiments, where possible.
Fig. 1 is a block diagram of an image display system 100 according to a first embodiment of the disclosure, and fig. 2 and 3 are schematic diagrams illustrating a display position of an image G according to the disclosure. Referring to fig. 1, an image display system 100 according to a first embodiment of the disclosure includes a display 102, at least one first information extraction device 104, at least one second information extraction device 106, at least one sensor 108, and a processing device 110.
Please refer to fig. 1 to fig. 3. The image display system 100 of the present disclosure may be applied to a vehicle (e.g., a car, a ship, etc.), and the display 102 may be disposed on the left and right windows or the front and rear windshields of the vehicle. Taking the position of the display 102 as a reference, the user 202 seated inside the vehicle is on the inner side of the display 102, and the external scenery is on the outer side. When the user 202 views the display 102, the user 202 has a field of view V on the display 102. The first information extraction device 104 of the image display system 100 is used for acquiring the field of view V of the user 202 on the display 102, and the second information extraction device 106 is used for extracting an external image IMG on the other side of the display 102 relative to the user 202.
The external image IMG has a first region I1 and a second region I2, and the second information extraction device 106 can calculate the image change rate C1 of the first region I1 and the image change rate C2 of the second region I2. In another example, the image change rate C1 of the first region I1 and the image change rate C2 of the second region I2 may instead be calculated by the processing device 110.
The attitude P of the vehicle may change while the vehicle is moving, and the sensor 108 of the image display system 100 is configured to sense the attitude P of the vehicle. The processing device 110 may generate an image G for display on the display 102, where the image G serves as a visually constant reference target for the user 202. Furthermore, the processing device 110 can adjust how the image G is displayed on the display 102 (for example, its display position or display angle) according to the field of view V of the user 202 acquired by the first information extraction device 104, the image change rates C1 and C2 of the first region I1 and the second region I2 calculated by the second information extraction device 106 or the processing device 110, and the attitude P of the vehicle sensed by the sensor 108.
Vehicle shaking, attitude changes, or rapid changes in the external scenery may cause a sensory conflict between the motion state perceived by the user 202 (e.g., the driver or a passenger) and the perceived spatial position, causing motion sickness. The image display system 100 of the present disclosure generates and displays the image G on the display 102, and the user 202 can view the image G on the display 102 from inside the vehicle. Compared with the rapidly changing external image IMG, the image G remains in a relatively constant position, so it can serve as a visually constant reference target for the user 202, thereby reducing the sensory conflict and alleviating motion sickness. The above is an overview of the operation and technical effects of the image display system 100; the detailed operation of each device or element is described below.
More specifically, referring to fig. 1 to fig. 3, the display 102 is, for example, a light-transmissive flat panel display, and the user 202 can look through the display 102 to view the external scenery of the vehicle on the other side. In one example, the display 102 may be disposed on a window of the vehicle. In another example, the display 102 may be of a surround-cabin type, disposed around the cabin on the front and rear windshields and the left and right windows of the vehicle.
The first information extraction device 104 is, for example, a camera, a light-sensing array, a low-power radar, or any other form of sensing device. The first information extraction device 104 may be disposed inside the vehicle to monitor and analyze the position and movement of the head 302 or eyes 304 of the user 202. In one embodiment, the first information extraction device 104 performs this monitoring and analysis in real time, responding immediately to dynamic changes of the head 302 or eyes 304 of the user 202. In one example, the image display system 100 may use edge computing: a processor (not shown) in the first information extraction device 104 calculates the current direction of the user's gaze to obtain the field of view V of the user 202 on the display 102, and further divides the field of view V into a main field-of-view region V1 and a sub field-of-view region V2. The main field-of-view region V1 is where the user's gaze is mainly concentrated, for example, the upper half of the field of view V; the sub field-of-view region V2 is the remaining field of view of the user 202, such as the lower half of V. In another example, the main field-of-view region V1 is the central area of the field of view V (comparable to the yolk of an egg), and the sub field-of-view region V2 is the peripheral area (comparable to the egg white). In other words, the main field-of-view region V1 can be a central region where the vision of the user 202 is mainly concentrated, and the sub field-of-view region V2 can be the remaining peripheral region.
Alternatively, the image display system 100 may use central computing: the processing device 110 calculates the current gaze direction of the user 202, obtains the field of view V, and further divides the field of view V into the main field-of-view region V1 and the sub field-of-view region V2.
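The V1/V2 division described above can be sketched in code. The following is a minimal illustration only; the rectangular region model, the `Region` type, and the 50% central fraction are assumptions, not values from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class Region:
    x: float  # top-left corner on the display
    y: float
    w: float  # width
    h: float  # height

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def split_field_of_view(v: Region, central_fraction: float = 0.5) -> Region:
    """Return the central main region V1 of the field of view V; the part of
    V outside V1 is treated as the peripheral sub region V2."""
    mw, mh = v.w * central_fraction, v.h * central_fraction
    return Region(v.x + (v.w - mw) / 2, v.y + (v.h - mh) / 2, mw, mh)

def in_sub_region(v: Region, v1: Region, px: float, py: float) -> bool:
    """True when a display point lies in V but outside V1, i.e. in V2."""
    return v.contains(px, py) and not v1.contains(px, py)
```

With this split, the image G would be placed at points where `in_sub_region` is true, keeping the main line of sight V1 unobstructed.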
After the first information extraction device 104 or the processing device 110 calculates the current field of view V of the user 202, the processing device 110 can display the image G within the field of view V. Since the main field-of-view region V1 is where the vision of the user 202 is mainly concentrated, in one embodiment the image G is displayed outside the main region V1 so as not to obstruct the user's primary line of sight: for example, in the lower half, the periphery, or the edge of the field of view V. That is, the image G may be displayed within the sub field-of-view region V2 or on its edge line, while other display information is shown in the main field-of-view region V1.
On the other hand, the second information extraction device 106 is, for example, a camera or any other form of image extraction or image sensing device. The second information extraction device 106 may be disposed outside the vehicle to extract an image containing the external scenery as the external image IMG. In one embodiment, the extraction is performed in real time, so that the extracted external image IMG contains the external scenery viewed by the user 202 at that moment. The second information extraction device 106 may also have a simple microprocessor to analyze the external image IMG. If the scenery outside the vehicle changes rapidly relative to the vehicle, the extracted external image IMG also changes rapidly, so that objects in the external image IMG have different degrees of similarity. The second information extraction device 106 determines and calculates the image change rate of the external image IMG according to the similarity of the color, brightness, and/or shape of the objects in the external image IMG. Referring again to fig. 3, in one example using edge computing, the second information extraction device 106 divides the external image IMG into at least a first region I1 and a second region I2, and calculates the image change rate C1 of the first region I1 and the image change rate C2 of the second region I2, respectively.
Alternatively, using central computing, the processing device 110 may calculate the image change rate C1 of the first region I1 and the image change rate C2 of the second region I2, so as to find the region with the higher image change rate: for example, it may be found that the image change rate C1 of the first region I1 is higher than the image change rate C2 of the second region I2. The first region I1 with the higher image change rate C1 is then selected by the second information extraction device 106 and/or the processing device 110, and the processing device 110 displays the image G in the selected first region I1.
In summary, a suitable display position of the image G on the display 102 may correspond to the sub field-of-view region V2 (or its edge line) of the field of view V of the user 202, or to the first region I1 of the external image IMG having the higher image change rate C1. In one example, the image G is preferentially displayed in the sub field-of-view region V2; if the sub region V2 also satisfies the condition of having a higher image change rate, the image G is displayed in the sub region V2, which may completely or partially overlap the first region I1 corresponding to the higher image change rate C1.
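The placement rule just summarized can be sketched as a small selection function. This is an illustrative sketch, not the claimed algorithm; the region labels and the fallback behavior are assumptions:

```python
def choose_display_region(change_rates: dict, in_sub_field: set) -> str:
    """Pick where to draw the reference image G.

    change_rates: region label -> image change rate, e.g. {"I1": C1, "I2": C2}
    in_sub_field: labels of regions overlapping the sub field-of-view V2
    """
    # Prefer regions that do not obstruct the main line of sight (inside V2);
    # fall back to all regions if none overlaps V2.
    candidates = [r for r in change_rates if r in in_sub_field] or list(change_rates)
    # Among candidates, the region with the highest change rate benefits most
    # from being masked by a visually constant reference image.
    return max(candidates, key=change_rates.get)
```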
Next, the detailed operation of the second information extraction device 106 or the processing device 110 for determining and calculating the image change rate of the external image IMG is described below with reference to the schematic diagram of fig. 4.
Fig. 4 is a schematic diagram illustrating a method for determining an image change rate according to the first embodiment of the disclosure, and fig. 5 is a flowchart of that method. Referring to fig. 4 and fig. 5, the second information extraction device 106 or the processing device 110 may analyze the similarity of various attributes (e.g., color, brightness, and/or shape) of the image within the first region I1 of the external image IMG, and accordingly determine the image change rate C1 of the first region I1. Similarly, the same attributes of the image in the second region I2 can be analyzed to determine the image change rate C2 of the second region I2. In the present embodiment, the similarity of these attributes within the first region I1 (or the second region I2) is analyzed by sampling, for example: one or more blocks 410, 412 are sampled within the external image IMG at a single point in time (corresponding to steps 502 and 504). In one example, block 410 is a larger block, block 412 is a smaller block, and block 410 may substantially include block 412 (i.e., block 412 may be fully or partially contained within block 410). Alternatively, block 410 may be adjacent to block 412 (i.e., block 412 need not be contained in block 410). The second information extraction device 106 or the processing device 110 then analyzes the attributes (e.g., color, brightness, shape) of the objects 414 and 416 within block 410 and of the object 416 within block 412, and compares the similarity between the attributes of block 410 and those of block 412 (corresponding to step 506).
In this embodiment, blocks 410 and 412 may be near the user, so the relative movement speed of the objects 414 and 416 with respect to the user is fast. The object 414 in block 410 may be, for example, a building near the vehicle, while the object 416 in block 412 may be, for example, a road or nearby scenery. If the objects 414 and 416 contain many constituent elements in the user's view (e.g., road signs, traffic lights, pedestrian crossings, shop signs, buildings, neighboring vehicles, or pedestrians), they are visually complex for the user.
Accordingly, if the relative movement speed of the objects 414 and 416 with respect to the user is fast, or their visual complexity is high, the similarity between the attributes (color, brightness, and/or shape) of block 410 and those of block 412 is low in the user's visual perception. In other words, if the degree of visual disorder of the objects 414 and 416 is high (fast relative movement or high complexity), the similarity between the attributes of block 410 and block 412 is low. Based on the similarity between the sampled blocks 410 and 412, the second information extraction device 106 or the processing device 110 can then determine the image change rate C1 of the first region I1 where blocks 410 and 412 are located (corresponding to step 508).
Alternatively, the second information extraction device 106 or the processing device 110 may sample one or more other blocks 402 and 404 in the external image IMG at the same time point (corresponding to steps 510 and 512). Block 402 may be a larger block, block 404 a smaller block, and block 402 may substantially include block 404. Alternatively, block 402 may be adjacent to block 404 (i.e., block 402 need not include block 404). The second information extraction device 106 or the processing device 110 then analyzes and compares the similarity between the attributes (e.g., color, brightness, shape) of the object 408 within block 404 and those of the objects 406 and 408 within block 402 (corresponding to step 514), and determines the image change rate C2 of the second region I2 where blocks 402 and 404 are located according to that similarity (corresponding to step 516).
In the present embodiment, blocks 402 and 404 may lie in the user's distant view, so the relative movement speed of the objects 406 and 408 with respect to the user is slow. The object 408 in block 404 is, for example, a mountain far from the vehicle, and the object 406 in block 402 is, for example, a distant cloud. In addition, if the objects 406 and 408 have few constituent elements and appear monotonous in the user's view, their visual complexity is low. If the relative movement speed of the objects 406 and 408 within blocks 402 and 404 is slow or their complexity is low, the blocks have a low degree of visual disorder for the user, and therefore the similarity between the color, brightness, and/or shape of block 404 and block 402 is high. Accordingly, the second information extraction device 106 or the processing device 110 can determine the image change rate C2 of the sampled blocks 402 and 404 (corresponding to step 516).
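The block-similarity determination above can be illustrated with a minimal sketch. It assumes brightness histograms as the compared attribute (the disclosure also mentions color and shape); the bin count and the histogram-intersection measure are implementation assumptions, not specified by the patent:

```python
def brightness_histogram(block, bins=8):
    """Coarse normalized histogram of a sampled block, given as a flat list
    of pixel intensities in 0-255."""
    h = [0] * bins
    for p in block:
        h[min(p * bins // 256, bins - 1)] += 1
    return [c / len(block) for c in h]

def similarity(block_a, block_b, bins=8):
    """Histogram-intersection similarity in [0, 1]; 1 means the two blocks
    have identical brightness distributions."""
    ha = brightness_histogram(block_a, bins)
    hb = brightness_histogram(block_b, bins)
    return sum(min(a, b) for a, b in zip(ha, hb))

def image_change_rate(block_a, block_b):
    """Low similarity between the sampled blocks (e.g. 410 vs 412) implies
    high visual disorder, hence a high image change rate."""
    return 1.0 - similarity(block_a, block_b)
```

Because only blocks from a single frame are compared, this matches the single-time-point analysis described in the embodiment rather than a multi-frame difference.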
Next, the second information extraction device 106 or the processing device 110 compares the image change rate C1 of the first region I1 with the image change rate C2 of the second region I2 (corresponding to step 518). As described above, in the present embodiment the comparison shows that the image change rate C1 of the first region I1 is higher than the image change rate C2 of the second region I2.
A high image change rate C1 easily causes conflicts in the user's visual perception, so the processing device 110 displays the image G in the first region I1 with the higher image change rate C1 (corresponding to step 520) as a visually constant reference target for the user. Besides serving as a constant reference target, the image G may be made larger so as to mask the more complex scenery in the first region I1, thereby eliminating visual-complexity interference. Conversely, the second region I2 with the lower image change rate C2 may display other display information 418 (e.g., text).
In a comparative approach, a plurality of images are extracted at a plurality of time points, and an image change rate is calculated from the changes between different images. Unlike this comparative approach, the method of determining an image change rate according to the first embodiment of the present disclosure extracts a single image (the external image IMG) at a single time point and analyzes a plurality of blocks (e.g., blocks 402, 404, 410, 412) within that single image, so that the image change rate can be determined immediately. In other words, the technical scheme of the disclosure can judge the image change rate at a single time point and determine the display position of the image G according to the result in real time, achieving real-time processing.
The above embodiment determines the display position of the image G according to the image change rate of the external image IMG, but the technical scheme of the present disclosure is not limited thereto. The processing device 110 can also adjust the display position and display angle of the image G according to the attitude P of the vehicle. The sensor 108 of the image display system 100 is, for example, a gyroscope, a level meter, or a six-axis sensor, which can sense the attitude P of the vehicle, for example, its inclination or displacement in the front, back, left, and right directions. Fig. 6A to 6D are schematic diagrams illustrating how the display position and/or display angle of the image G may be adjusted. Referring first to fig. 6A, the sensor 108 senses that the attitude P of the vehicle 602 is tilted forward and senses the displacement caused by the forward tilt; accordingly, the processing device 110 executes a constant-position algorithm according to the change in the attitude P of the vehicle 602 to keep the image G at a constant display position on the display 102, compensating for the displacement caused by the attitude change of the vehicle 602.
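The constant-position compensation can be sketched as follows. The 2D displacement model is an assumption for illustration; the disclosure does not specify how the sensed attitude maps to an on-display displacement:

```python
def constant_position(base_pos, attitude_displacement):
    """Keep image G visually fixed: offset its on-display position by the
    negative of the displacement induced by the vehicle's attitude change.

    base_pos: (x, y) position of G on the display before the attitude change
    attitude_displacement: (dx, dy) apparent shift caused by the tilt
    """
    bx, by = base_pos
    dx, dy = attitude_displacement
    return (bx - dx, by - dy)
```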
Furthermore, in an alternative embodiment, the processing device 110 may also adjust the display angle of the image G on the display 102 in response to a change in the attitude P of the vehicle 602. For example, the display angle of the image may be adjusted to an angle complementary (opposite) to the attitude P of the vehicle 602 according to the user's habit, so that the image G presented on the display 102 is visually constant for the user: neither its position nor its angle appears to change. In one example, the image G is an image of a mailbox. If the attitude P of the vehicle 602 tilts forward, the display angle of the mailbox is correspondingly adjusted to a looking-up (bottom-view) angle. Referring to fig. 6B, if the sensor 108 senses that the vehicle 602 is in a normal, untilted attitude, the mailbox is displayed at a front-view angle. Referring to fig. 6C, if the sensor 108 senses that the attitude P of the vehicle 602 tilts backward, the mailbox is displayed at a top-view angle. In another example, according to different user habits, the display angle of the image G may instead follow the attitude change of the vehicle 602: if the attitude P tilts forward, the mailbox is displayed at a top-view angle; if the vehicle 602 tilts backward, the mailbox is displayed at a bottom-view angle.
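Both user habits reduce to a sign choice on the sensed pitch. A sketch, with mode names and the sign convention (positive pitch = forward tilt) as assumptions:

```python
def display_angle(vehicle_pitch_deg: float, mode: str = "complementary") -> float:
    """Display angle for image G given the vehicle's sensed pitch.

    "complementary": counter the attitude so G looks constant to the user
    (forward tilt -> bottom-view angle); "same": follow the attitude change.
    """
    if mode == "complementary":
        return -vehicle_pitch_deg
    if mode == "same":
        return vehicle_pitch_deg
    raise ValueError(f"unknown mode: {mode}")
```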
In another example of the present disclosure, the image G may also be a geometric figure. As shown in fig. 6D, the image G is a bar-shaped image, and its display angle on the display 102 is complementary to the attitude P of the vehicle 602. In the example of fig. 6D, the attitude P of the vehicle 602 is tilted backward while the bar-shaped image G is tilted forward, so that the bar viewed by the user inside the vehicle 602 still appears horizontal. Also in this example, the left end G_L of the bar-shaped image G can be aligned with the boundary between the front door window of the vehicle 602 and its A pillar (the body pillar at the front of the vehicle).
The image G is not limited to a static single pattern; it may also be a dynamic pattern with three-axis dynamic display. Such a dynamic image changes the display position or display angle of the image G in real time in response to changes in the attitude of the vehicle 602. For example, if the vehicle 602 is sensed in real time to be tilting forward, the display angle of the image G can be dynamically changed to a top-view angle in real time.
In addition to determining the display position of the image G according to the image change rate of the external image IMG, or adjusting its display angle according to the attitude P of the vehicle, the processing device 110 may further select the type and/or color of the image G according to the field where the vehicle 602 is located. Fig. 7A and 7B are schematic diagrams illustrating how the type and/or color of the image G is selected according to the present disclosure, and fig. 8 is a flowchart of the selection method.
Referring to figs. 7A, 7B, and 8, the image display system 100 may further include a positioning device (e.g., a GPS receiver) for obtaining the geographic coordinates of the user or the vehicle, and the processing device 110 of the image display system 100 may determine the field where the user or the vehicle is located according to those coordinates (corresponding to steps 802 and 804).
If the processing device 110 determines that the field is an urban street (as shown in fig. 7A), the type of the image G may be chosen from objects commonly found on urban streets, such as a mailbox, so that the image G blends easily into the external image IMG of the urban street field (corresponding to step 806). In another example, if the processing device 110 determines that the field is the countryside (as shown in fig. 7B), the image G may be an object commonly found in the countryside, such as a tree, so that it blends easily into the external image IMG of the countryside (corresponding to step 806).
After the second information extraction device 106 obtains the external image IMG of the vehicle (corresponding to step 808), the processing device 110 can further analyze the color or hue of the external image IMG (corresponding to step 810) and adjust the color of the image G accordingly. If the external image IMG is a city street view dominated by the gray-white tone of building exteriors (as shown in fig. 7A), the color of the image G may be adjusted to a complementary or opposite color of gray-white, for example black (the opposite of white), so that the image G stands out in the external image IMG (corresponding to step 812). In another example, if the external image IMG is rural grassland with a green hue (as shown in fig. 7B), the color of the image G may be adjusted to red (a complement of green), so that the image G stands out from the external image IMG (corresponding to step 812). The adjusted image G can then be generated according to the selected type and color (corresponding to step 814).
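The color adjustment of steps 810 to 812 can be illustrated as a hue rotation in HSV space, using only the Python standard library. Note this is one plausible reading, not the patent's algorithm: in RGB/HSV terms the 180-degree complement of pure green is magenta, whereas the example above pairs green with red following the traditional painter's color wheel. The function name and the use of a single representative scene color are assumptions.

```python
import colorsys

def complementary_color(rgb):
    """Return the HSV complement of a representative scene color.

    rgb: (r, g, b) with components in 0..255, e.g. the mean color of the
    external image IMG. The hue is rotated by 180 degrees while keeping
    saturation and value, so the image G contrasts with the background.
    """
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = (h + 0.5) % 1.0  # 180-degree rotation around the hue circle
    r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
    return tuple(round(c * 255) for c in (r2, g2, b2))
```

For instance, a red-dominated scene yields cyan and a green-dominated scene yields magenta; a near-white scene yields a color of the same near-white value, so a separate rule (e.g., darkening toward black) would be needed for achromatic backgrounds such as the gray-white street view.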
The examples described above display the image G on the display 102 as a visually constant reference target for the user. In other examples, if the external image IMG already contains a ready-made reference target, the system may choose not to display the image G on the display 102. Fig. 9 is a schematic diagram illustrating how a reference target in the external image IMG is determined according to the present disclosure, and fig. 10 is a flowchart of the determination method.
Referring to figs. 9 and 10, the external image IMG is analyzed by the second information extraction device 106 or the processing device 110 to determine whether it contains a ready-made reference target. First, the external image IMG is determined to be blurred or sharp (corresponding to step 1002), for example by examining the sharpness of the edges of an object 902 in the external image IMG. If the edge of the object 902 consists of discontinuous break points, the object 902 is blurred and cannot serve as a reference target (corresponding to step 1004), and the processing device 110 automatically generates an image G to be displayed on the display 102 (corresponding to step 1006).
Conversely, if the edge of the object 902 is a continuous contour, the external image IMG is determined to be sharp, and the object 902, with its sharp and continuous contour, may serve as a ready-made reference target. At this point, the gray-level values of the external image IMG may optionally be analyzed further to determine whether the image is dim due to low brightness (corresponding to step 1008). If the external image IMG has high brightness, the object 902 is sharp, clearly outlined, and bright, and is likely suitable as a ready-made reference target.
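Steps 1002 and 1008 (the sharpness and brightness checks) could be approximated as below. The disclosure speaks of edge continuity; the Laplacian-variance sharpness measure used here and both thresholds are assumptions for illustration.

```python
import numpy as np

def analyze_frame(gray):
    """Return (sharpness, brightness) for a 2-D array of gray levels 0..255."""
    # Discrete Laplacian: responds strongly to sharp, continuous edges,
    # weakly to blurred ones, so its variance is a common sharpness proxy.
    lap = (gray[:-2, 1:-1] + gray[2:, 1:-1] +
           gray[1:-1, :-2] + gray[1:-1, 2:] - 4.0 * gray[1:-1, 1:-1])
    return float(lap.var()), float(gray.mean())

def has_usable_reference(gray, sharp_thresh=50.0, bright_thresh=60.0):
    """True if the frame is sharp and bright enough for a ready-made target."""
    sharpness, brightness = analyze_frame(gray)
    return sharpness > sharp_thresh and brightness > bright_thresh
```

A frame with a crisp edge passes both checks, while a uniformly lit frame with no edges fails the sharpness check and would trigger generation of the image G instead.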
Next, the position of the object 902 within the external image IMG may be analyzed to determine whether the time for which the object 902 stays within the display range of the display 102 exceeds a threshold value (corresponding to step 1010). If the dwell time exceeds the threshold, the object 902 occupies a relatively constant position and can serve as a visual reference for the user (corresponding to step 1012). For example, if the object 902 in the external image IMG is a distant mountain whose dwell time within a specific display range of the display 102 exceeds the threshold, the mountain occupies a relatively constant position; since its contour is also sharp, clear, and bright, it can be used as the reference target. In this case the processing device 110 does not necessarily need to generate the image G, and the user may decide whether to generate and display the image G on the display 102 (corresponding to step 1014). Steps 1002, 1008, and 1010 may be executed in any order as required.
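The dwell-time test of step 1010 amounts to checking how long a tracked object stays inside the display range without leaving. A minimal sketch, with the track format and the threshold value assumed:

```python
def is_constant_reference(track, display_box, min_dwell=3.0):
    """True if an object stayed inside display_box for at least min_dwell seconds.

    track: chronological list of (timestamp_s, (x, y)) object-center samples.
    display_box: (x0, y0, x1, y1) display range on the display.
    """
    x0, y0, x1, y1 = display_box
    entered = None  # time the object last entered the box
    longest = 0.0
    for t, (x, y) in track:
        if x0 <= x <= x1 and y0 <= y <= y1:
            if entered is None:
                entered = t
            longest = max(longest, t - entered)
        else:
            entered = None  # the object left the box; reset the dwell timer
    return longest >= min_dwell
```

A distant mountain produces a track that stays inside the box for the whole observation window, while nearby roadside objects sweep through quickly and fail the test.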
The detailed implementation of the embodiments of the image display system 100 of the present disclosure has been described above; the image display methods implemented with the image display system 100 are described below. Fig. 11 is a flowchart of an image display method 1100 according to a first embodiment of the disclosure; please refer to fig. 11 (and figs. 1 to 4). First, in step 1102, the head and/or eyes of the user may be monitored by the first information extraction device 104 to obtain the user's line-of-sight direction, and the field of view V of the user 202 on the display 102 of the image display system 100 may be obtained by the first information extraction device 104 or the processing device 110.
Then, in step 1104, at least one image G can be displayed by the processing device 110 within the field of view V on the display 102. More specifically, the image G is displayed in a region outside the user's main line of sight, that is, in the secondary field-of-view region V2 reached by the user's peripheral vision. In step 1106, the external image IMG on the other side of the display 102 relative to the user 202 may be extracted by the second information extraction device 106.
Next, in step 1108, the image change rate C1 of the first area I1 of the external image IMG and the image change rate C2 of the second area I2 can be calculated by the second information extraction device 106 or the processing device 110. More specifically, the similarity of attributes such as color, brightness, and/or shape within one or more blocks of the external image IMG can be analyzed, together with the visual complexity of the objects in those blocks, to determine the image change rate of the area of the external image IMG in which each block lies.
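Step 1108 can be sketched by using the statistical spread of color and brightness inside each area as a proxy for low similarity, and hence a high image change rate. The split into two vertical areas and the standard-deviation proxy are assumptions for illustration, not the disclosure's exact measure.

```python
import numpy as np

def region_change_rate(region):
    """Proxy image change rate of one area of a single external image IMG.

    region: H x W x 3 array. Low similarity of color and brightness within
    the area corresponds to a high change rate, so the standard deviation
    of the pixel values serves as a simple stand-in.
    """
    return float(region.std())

def pick_display_area(img, split_col):
    """Split IMG into a left area I1 and a right area I2 at split_col and
    return 0 or 1 for the area with the higher change rate, i.e. the area
    in which the image G should be displayed."""
    c1 = region_change_rate(img[:, :split_col])
    c2 = region_change_rate(img[:, split_col:])
    return 0 if c1 >= c2 else 1
```

Because only one frame at one time point is examined, this matches the single-image analysis described later and avoids buffering multiple frames.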
In step 1110, the image G may be displayed by the processing device 110 in the first region I1, which has the higher image change rate C1, so that the image G serves as a visually constant reference that reduces the user's motion sickness.
Referring to figs. 6A to 6D, in step 1112, the attitude P of the vehicle 602 may be sensed by the sensor 108, and the constant display position of the image G may be calculated according to the attitude P. Next, in step 1114, the display position and/or display angle of the image G on the display 102 is adjusted according to the constant display position. Then, in step 1116, the image G is displayed on the display 102 at the adjusted display position and/or display angle.
Fig. 12A and 12B are flowcharts of an image display method 1200 according to a second embodiment of the disclosure. Steps 1202-1206 and 1212-1216 of the image display method 1200 are similar to corresponding steps of the image display method 1100 of fig. 11 and are not repeated here. Steps 1207 to 1210, on the other hand, correspond to the type and/or color selection of the image G described with reference to figs. 7A, 7B, and 8. In step 1207, the positioning device locates the user or the vehicle and obtains its geographic coordinates, from which the processing device 110 determines the field where the user or the vehicle is located, such as a city or the countryside.
Next, in step 1208, the type of the image G is selected according to the field where the vehicle is located, so that the image G blends easily into the external image IMG of that field. In step 1209, the color or tone of the external image IMG may be analyzed by the processing device 110. Next, in step 1210, the color of the image G is adjusted by the processing device 110 to an opposite or complementary color of the external image IMG according to its color or tone.
Fig. 13A and 13B are flowcharts of an image display method 1300 according to a third embodiment of the disclosure. Steps 1302-1310 of the image display method 1300 are similar to corresponding steps of the image display method 1100 of fig. 11 or the image display method 1200 of figs. 12A and 12B and are not repeated here.
Steps 1312 and 1313 of the image display method 1300, on the other hand, correspond to the reference-target determination of figs. 9 and 10. Referring to figs. 9, 10, 13A, and 13B, in step 1312 the second information extraction device 106 or the processing device 110 can determine whether the external image IMG contains a ready-made reference target, for example by determining that a distant mountain in the external image IMG can serve as one.
Then, in step 1313, if the external image IMG contains a ready-made reference target, the processing device 110 does not necessarily need to generate the image G; instead, the user may decide whether to generate the image G and display it on the display 102.
If the user decides to use the ready-made reference target, the processing device 110 does not generate the image G, and the image display method 1300 of this embodiment ends. If the user does not use the ready-made reference target, the method proceeds through steps 1314 to 1316, in which the image G is automatically generated and adjusted by the processing device 110 to serve as the reference target.
Fig. 14A is a flowchart of an image display method 1400 according to a fourth embodiment of the disclosure. Referring to fig. 14A, the image display method 1400 is similar to the image display method 1100 of the first embodiment, except that it further includes step 1418, in which the image G is displayed in a single-window or surround-cabin (ring cabin) manner, and an optional step 1420, in which the external image IMG is blurred or masked.
Fig. 14B and 14C are schematic diagrams illustrating step 1418 of the image display method 1400, in which the image G is displayed in a single-window or surround-cabin manner. Referring to fig. 14B, if the image G is displayed in the single-window manner, it can be shown on a single window 1404 of the vehicle 1402. The image G is, for example, a horizontally disposed strip displayed on the window 1404 of a rear door, with its right end point G_R aligned with the boundary between the upper edge of the rear door and the C pillar (the body pillar behind the vehicle), approximately above the door handle of the rear door. Referring to fig. 14C, if the image G is displayed in the surround-cabin manner, it can be displayed on all windows and windshields of the vehicle 1402 in a linked manner. For example, the image G is displayed on the left front window 1406, the left rear window 1404, the right front and rear windows (not shown), the front windshield 1408, and the rear windshield 1410 to provide a visual reference across the user's full field of view.
Fig. 14D is a schematic diagram illustrating optional step 1420 of the image display method 1400, in which the external image IMG is masked. Referring to fig. 14D, the image G may, for example, be enlarged to cover most of the display range of the display, masking the outside scenery. This reduces the interference of the outside scenery with the user and suppresses the user's motion sickness more thoroughly.
Fig. 15A and 15B are flowcharts of an image display method 1500 according to a fifth embodiment of the disclosure. Referring to figs. 15A and 15B, the image display method 1500 is similar to the image display method 1300 of the third embodiment, except that it further includes step 1512, which determines whether the user has motion sickness. If the user has motion sickness, the image G needs to be displayed to provide a visually constant reference target, and steps 1514-1516 are performed. If the user does not have motion sickness, step 1513 is executed to further decide whether to generate and display the image G.
Fig. 16A is a flowchart of an image display method 1600 according to a sixth embodiment of the disclosure. Referring to fig. 16A, compared with the image display method 1100 of the first embodiment, the image display method 1600 further includes steps 1611 and 1612, which estimate in advance the subsequent-time attitude P_f that the vehicle may assume at a future time and generate prompt information Ht accordingly. Fig. 16B is a block diagram of an image display system 100B for implementing the image display method 1600, and fig. 16C is a schematic diagram of the prompt information Ht displayed by the image display method 1600.
Referring to figs. 16A-16C, the sensor 108 of the image display system 100B may further include a sensing circuit (not shown) for sensing feedback signals from the steering wheel, brake, throttle, turn signals, and the like of the vehicle. In step 1611, the sensor 108 estimates in advance, based on these feedback signals, the attitude change the vehicle may undergo at a future (subsequent) time. For example, the sensor 108 may estimate a likely change in the direction of travel from a steering-wheel or turn-signal feedback signal, or a likely deceleration from a brake feedback signal; accordingly, it estimates the subsequent-time attitude P_f of the vehicle to be turning or decelerating. The sensor 108 can also sense the force on a vehicle seat (e.g., the pressure applied to the seat) to detect a change in the posture of the user 202 and thereby estimate the subsequent-time attitude P_f to be acceleration. In other aspects, the sensor 108 may include a light detection and ranging (LiDAR) device or another image sensing device to detect the road condition ahead of the vehicle, in which case the estimated subsequent-time attitude P_f may be turning, acceleration or deceleration, reverse travel (e.g., backing into a garage), and so on.
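The pre-estimation of step 1611 can be illustrated as a simple rule table over the sensed feedback signals. The signal names, thresholds, and pose labels below are assumptions for illustration, not values from the disclosure.

```python
def estimate_future_pose(feedback):
    """Predict subsequent-time attitude changes P_f from feedback signals.

    feedback: dict of sensed values, e.g.
        {'steering_deg': -15, 'brake': 0.4, 'throttle': 0.0,
         'turn_signal': 'left', 'seat_pressure_delta': 0.0}
    Returns a list of predicted attitude-change labels.
    """
    poses = []
    # Steering-wheel or turn-signal feedback suggests a change of direction.
    if abs(feedback.get('steering_deg', 0)) > 5 or feedback.get('turn_signal'):
        poses.append('turning')
    # Brake feedback suggests deceleration.
    if feedback.get('brake', 0) > 0.1:
        poses.append('decelerating')
    # Throttle or seat-pressure feedback suggests acceleration.
    if (feedback.get('throttle', 0) > 0.1
            or feedback.get('seat_pressure_delta', 0) > 0.1):
        poses.append('accelerating')
    return poses
```

The resulting labels would then drive the generation of the prompt information Ht so the user can be warned before the attitude change occurs.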
Next, in step 1612, the sensor 108 transmits the estimated subsequent-time attitude P_f to the processing device 110, which generates the corresponding prompt information Ht and displays it on the display 102. The prompt information Ht is, for example, a graphic, text, or a combination of the two, through which the user 202 learns in advance of the attitude change the vehicle may undergo. The user 202 can thus prepare mentally, or adjust his or her body posture in advance to accommodate the coming change, thereby reducing motion sickness.
According to the embodiments and modifications described above, a technical solution of the present disclosure may arrange the display 102 on a single window or on all windows (including the front and rear windshields) of the vehicle, in a single-window or surround-cabin manner. The display 102 may be light-transmissive so that the user 202 can look through it at the scene outside the vehicle. That scene can be extracted as the external image IMG, and an image G can be generated and displayed on the display 102 to provide a visually constant reference target that reduces the motion sickness of the user 202. The display position of the image G can be determined according to the field of view V of the user 202 on the display 102 and the image change rate of the external image IMG, and the display position and display angle can be adjusted in real time as the attitude P of the vehicle changes. Further, the type or color of the image G can be chosen according to the field where the user 202 or the vehicle is located and the background color tone, to optimize the suppression of motion sickness. A technical solution of the present disclosure is also flexible in letting the user 202 decide whether to display the image G, based on whether the user 202 has motion sickness symptoms or whether the external image IMG already contains a ready-made reference target.
When the image change rate of the external image IMG is determined, a single external image IMG extracted at a single time point may be analyzed, and the image change rate determined from the complexity and similarity of objects in its different regions I1 and I2. There is no need to analyze multiple images across multiple time points, which saves computational resources. A technical solution of the present disclosure can thus determine the image change rate at a single time point and decide the display position of the image G in real time.
While the present invention has been described in detail with reference to the preferred embodiments and examples thereof, it is to be understood that these examples are illustrative rather than limiting. Modifications and combinations will occur to those skilled in the art, and such modifications and combinations remain within the spirit of the invention and the scope of the appended claims.

Claims (20)

1. An image display system comprising:
a display;
at least one first information extraction device for obtaining the visual field range of the user on the display;
at least one second information extraction device for extracting an external image of the other side of the display relative to the user, the external image having a first region and a second region;
at least one sensor for sensing an attitude of a vehicle; and
a processing device for generating at least one image and displaying the at least one image on the display according to the visual field range, the image change rate of the first area, the image change rate of the second area, and the attitude of the vehicle.
2. The image display system according to claim 1, wherein the at least one second information extraction device or the processing device calculates the image change rate of the first area and the image change rate of the second area according to a degree of similarity of colors, brightness and/or shapes in the first area and the second area, a low degree of similarity corresponding to a high image change rate; the image change rate of the first area is higher than the image change rate of the second area, and the processing device displays the at least one image on a portion of the display corresponding to the first area.
3. The image display system of claim 1, wherein the processing device displays the at least one image within the field of view of the user.
4. The image display system of claim 3, wherein the field of view includes a main field of view region and a sub-field of view region, the processing device displaying the at least one image in a region outside the main field of view region.
5. The image display system of claim 1, wherein the processing device adjusts a display position and/or a display angle of the at least one image on the display according to the attitude of the vehicle.
6. The image display system of claim 1, wherein the at least one sensor further estimates a subsequent-time attitude of the vehicle in advance, and the processing device generates prompt information according to the subsequent-time attitude and displays the prompt information on the display.
7. The image display system of claim 1, further comprising:
a positioning device for positioning the geographic coordinates of the user or the vehicle;
wherein the processing device further determines the field where the user or the vehicle is located according to the geographic coordinates, and selects the type of the at least one image according to the field.
8. The image display system of claim 1, wherein the at least one second information extraction device or the processing device further determines whether the external image has at least one reference target; if not, the processing device displays the at least one image on the display, and if so, the user determines whether to display the at least one image on the display.
9. The image display system of claim 8, wherein the external image comprises at least one object, and the at least one second information extraction device selects the at least one object as the at least one reference target according to the sharpness, brightness, and dwell time of the at least one object.
10. The image display system of claim 1, wherein the display is disposed, in a single-window manner or a surround-cabin manner, on at least one window and/or at least one windshield of the vehicle.
11. An image display method is applied to an image display system, and comprises the following steps:
acquiring a visual field range of a user on a display of the image display system;
extracting an external image of the other side of the display relative to the user, wherein the external image is provided with a first area and a second area;
calculating the image change rate of the first area and the image change rate of the second area;
sensing an attitude of a vehicle; and
displaying at least one image on the display according to the visual field range, the image change rate of the first area, the image change rate of the second area, and the attitude of the vehicle.
12. The image display method of claim 11, wherein the step of calculating the image change rate of the first area and the image change rate of the second area further comprises: calculating the image change rate of the first area according to a degree of similarity of colors, brightness and/or shapes in the first area, and calculating the image change rate of the second area according to a degree of similarity of colors, brightness and/or shapes in the second area, a low degree of similarity corresponding to a high image change rate, the image change rate of the first area being higher than the image change rate of the second area; and the image display method further comprises: displaying the at least one image on a portion of the display corresponding to the first area.
13. The image display method according to claim 11, wherein the at least one image is displayed within the field of view of the user.
14. The image display method according to claim 13, wherein the at least one image is displayed in a region outside a main field of view region of the field of view.
15. The image display method according to claim 11, wherein a display position and/or a display angle of the at least one image on the display is adjusted according to the attitude of the vehicle.
16. The image display method according to claim 11, further comprising:
locating geographic coordinates of the user or the vehicle;
determining the field where the user or the vehicle is located according to the geographic coordinates; and
selecting the type of the at least one image according to the field.
17. The image display method according to claim 11, further comprising:
judging whether the external image has at least one reference target;
if not, displaying the at least one image on the display; and
if so, the user determines whether to display the at least one image on the display.
18. The image display method according to claim 17, wherein the external image includes at least one object, the image display method further comprising:
analyzing the definition, brightness and residence time of the at least one object; and
and selecting the at least one object as the at least one reference target according to the definition, the brightness and the staying time of the at least one object.
19. The image display method according to claim 11, wherein the display is disposed, in a single-window manner or a surround-cabin manner, on at least one window and/or at least one windshield of the vehicle.
20. The image display method according to claim 11, further comprising:
estimating a subsequent-time attitude of the vehicle in advance;
generating prompt information according to the subsequent time posture; and
and displaying the prompt information on the display.
CN202111114413.9A 2020-11-20 2021-09-23 Image display system and image display method for preventing motion sickness Pending CN114518856A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202063116166P 2020-11-20 2020-11-20
US63/116,166 2020-11-20
TW110133053A TWI790738B (en) 2020-11-20 2021-09-06 Image display system for preventing motion sick and image display method thereof
TW110133053 2021-09-06

Publications (1)

Publication Number Publication Date
CN114518856A (en)

Family

ID=81595146

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111114413.9A Pending CN114518856A (en) 2020-11-20 2021-09-23 Image display system and image display method for preventing motion sickness

Country Status (1)

Country Link
CN (1) CN114518856A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007004525A (en) * 2005-06-24 2007-01-11 National Institute Of Advanced Industrial & Technology Information presentation system for reducing motion sickness and presentation method
KR20120112003A (en) * 2012-02-24 2012-10-11 (주)브랜드스토리 Vehicle for sightseeing provided with transparent display and method for guiding sightseeing using the same
CN109842790A (en) * 2017-11-29 2019-06-04 财团法人工业技术研究院 Image information display methods and display
CN110509851A (en) * 2019-08-09 2019-11-29 上海豫兴电子科技有限公司 A kind of multi-curvature electronics rearview mirror of servo-actuated display
WO2020017139A1 (en) * 2018-07-19 2020-01-23 株式会社アルファコード Virtual-space-image providing device and program for providing virtual space image



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination