WO2023286835A1 - Gaze guidance device, gaze guidance method, gaze guidance program, and storage medium - Google Patents

Gaze guidance device, gaze guidance method, gaze guidance program, and storage medium Download PDF

Info

Publication number
WO2023286835A1
Authority
WO
WIPO (PCT)
Prior art keywords
order
line
sight
guidance
ease
Prior art date
Application number
PCT/JP2022/027707
Other languages
French (fr)
Japanese (ja)
Inventor
俊明 井上
Original Assignee
パイオニア株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パイオニア株式会社
Priority to JP2023534856A (JPWO2023286835A1)
Publication of WO2023286835A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis

Definitions

  • the present invention relates to a gaze guidance device, a gaze guidance method, a gaze guidance program, and a storage medium.
  • the line-of-sight movement route estimated from an image of the front of the vehicle may not match the line-of-sight movement route for driving the vehicle safely.
  • the present invention has been made in view of the above, and aims to provide a line-of-sight guidance device, a line-of-sight guidance method, a line-of-sight guidance program, and a storage medium that can guide the line of sight.
  • the line-of-sight guidance device includes: a calculation unit that calculates a first order in which a plurality of regions in an image, determined based on the degree of ease with which the line of sight gathers, are arranged in descending order of the peak value of that degree; a determination unit that determines whether or not a designated second order and the first order are the same; and a guidance unit that guides the line of sight according to the second order when the determination unit determines that the first order and the second order are different.
  • the line-of-sight guidance method is a method executed by a computer, and includes: a calculation step of calculating a first order in which a plurality of regions in an image, determined based on the degree of ease with which the line of sight gathers, are arranged in descending order of the peak value of that degree; a determination step of determining whether or not a designated second order and the first order are the same; and a guidance step of guiding the line of sight according to the second order when the determination step determines that the first order and the second order are different.
  • the line-of-sight guidance program causes a computer to execute: a calculation step of calculating a first order in which a plurality of regions in an image, determined based on the degree of ease with which the line of sight gathers, are arranged in descending order of the peak value of that degree; a determination step of determining whether or not a designated second order and the first order are the same; and a guidance step of guiding the line of sight according to the second order when the determination step determines that the first order and the second order are different.
  • the storage medium stores a line-of-sight guidance program for causing a computer to execute: a calculation step of calculating a first order in which a plurality of regions in an image, determined based on the degree of ease with which the line of sight gathers, are arranged in descending order of the peak value of that degree; a determination step of determining whether or not a designated second order and the first order are the same; and a guidance step of guiding the line of sight according to the second order when the determination step determines that the first order and the second order are different.
  • FIG. 1 is a diagram showing an overview of the line-of-sight guidance process.
  • FIG. 2 is a diagram illustrating a configuration example of a visual guidance device.
  • FIG. 3 is a diagram illustrating visual salience.
  • FIG. 4 is a diagram illustrating an example of calculation of the order of line of sight.
  • FIG. 5 is a diagram showing an example of a line-of-sight guidance method.
  • FIG. 6 is a diagram showing an example of a line-of-sight guidance method.
  • FIG. 7 is a diagram showing an example of a line-of-sight guidance method.
  • FIG. 8 is a flow chart showing the processing flow of the visual guidance device.
  • the line-of-sight guidance device according to the first embodiment calculates the movement order of the line of sight based on an image, and performs guidance so that the movement order matches a designated order.
  • FIG. 1 is a diagram showing an overview of the line-of-sight guidance process. As shown in FIG. 1, the line-of-sight guidance device 10 is provided in a vehicle V.
  • the line-of-sight guidance device 10 may be an in-vehicle device such as a drive recorder and a car navigation system, or an information processing device such as a personal computer and a server device.
  • in addition, the blocks may be distributed as appropriate; for example, the calculation unit 131 and the determination unit 132 may reside on a server while the guidance unit 133 resides on a terminal in the vehicle. The blocks are not necessarily integrated into a single device.
  • the visual guidance device 10 first captures an image in front of the vehicle V (step S1).
  • next, the line-of-sight guidance device 10 calculates the order of the line of sight from the image based on visual saliency (step S2). The method of calculating this order is described later.
  • then, if the order of the line of sight calculated in step S2 differs from the designated order, the line-of-sight guidance device 10 guides the line of sight according to the designated order (step S3).
  • the line-of-sight guidance device 10 can guide the line of sight by outputting an image or sound.
  • FIG. 2 is a diagram showing a configuration example of a visual guidance device.
  • the visual guidance device 10 has an interface section 11 , a storage section 12 and a control section 13 .
  • the interface unit 11 is an interface for inputting and outputting data. Also, the interface unit 11 may be a communication module capable of data communication with another device via a communication network such as the Internet.
  • the interface unit 11 is connected to the camera 20 and the output device 30.
  • the camera 20 is provided in the vehicle V and photographs the surroundings or inside of the vehicle V.
  • the output device 30 is a display that displays images or a speaker that outputs audio.
  • the output device 30 may be a see-through projection with a transparent screen and a projector.
  • the transparent screen may be the front window of the vehicle.
  • the output device 30 may be AR (Augmented Reality) goggles that allow viewing of real scenery and CG at the same time.
  • the output device 30 may be any device capable of displaying an object superimposed on the scenery, such as a head-up display, an MR (Mixed Reality) device, or a mirror device having a transmissive display.
  • the storage unit 12 stores various programs executed by the visual guidance device 10, data necessary for execution of processing, and the like.
  • the storage unit 12 stores model information 121.
  • the model information 121 is parameters such as weights for constructing a neural network for calculating visual saliency.
  • the control unit 13 is realized by a controller such as a CPU (Central Processing Unit) or MPU (Micro Processing Unit) executing the various programs stored in the storage unit 12, and controls the operation of the entire line-of-sight guidance device 10.
  • control unit 13 is not limited to a CPU or MPU, and may be implemented by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).
  • the control unit 13 has a calculation unit 131 , a determination unit 132 and a guidance unit 133 .
  • the calculation unit 131 calculates the order in which the plurality of regions in the image, which are determined based on the degree of ease of line-of-sight gathering, are arranged in descending order of the peak value of the degree of ease of line-of-sight gathering.
  • the order calculated by the calculation unit 131 is an example of the first order.
  • the degree of ease with which the line of sight gathers is, for example, visual saliency.
  • FIG. 3 is a diagram illustrating visual salience.
  • visual saliency is an index obtained by estimating the position of the line of sight of the driver for an image showing the front of the vehicle (see, for example, Patent Document 1).
  • Visual salience may be calculated by inputting an image into a neural network.
  • for example, the neural network is trained on a large number of images captured in a wide variety of domains, together with the gaze information of multiple subjects who actually viewed those images.
  • Visual salience is, for example, an 8-bit (0 to 255) value given to each pixel of an image, and is expressed as a value that increases as the probability of being the position of the driver's line of sight increases.
  • a saliency map can be obtained by arranging (mapping) the visual saliency to the corresponding pixels of the image.
  • a saliency map can be said to be data that maps, for each pixel in an image, the degree of ease with which the line of sight gathers.
  • multiple regions can be defined by dividing the saliency map, which is a plane corresponding to the image. At this time, each region has a point at which the visual salience peaks.
  • the method of division is not particularly limited; for example, it may be based on object regions detected using an SSD (single shot multiple detector), or on a clustering method such as k-means. Alternatively, points taking peak values may be detected directly by scanning and comparing the saliency map pixel by pixel, without dividing it into regions. It is also known that the line of sight tends to move, in units of the calculated visual saliency regions, in order from regions with larger peak values to regions with smaller peak values (see, for example, Adachi et al., MIRU2017).
  • the peak value and area can be defined by any method.
  • for example, the line-of-sight guidance device 10 identifies the point of maximum visual saliency in the entire saliency map. It then identifies the point of maximum visual saliency in the remaining area, excluding a region of predetermined range centered on the already-identified point. By repeating this process, the line-of-sight guidance device 10 can regard the identified points as the points taking peak values.
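The iterative peak-identification procedure above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: `find_peaks`, `num_peaks`, and `suppress_radius` are invented names, and suppression uses a square window as a stand-in for the "region of predetermined range".

```python
import numpy as np

def find_peaks(saliency, num_peaks=4, suppress_radius=2):
    """Greedily pick the highest-saliency point, suppress a square
    window around it, and repeat -- one reading of the iterative
    procedure described above."""
    s = saliency.astype(float)          # working copy; originals stay intact
    peaks = []
    h, w = s.shape
    for _ in range(num_peaks):
        y, x = np.unravel_index(np.argmax(s), s.shape)
        if s[y, x] < 0:                 # everything left is already suppressed
            break
        peaks.append((int(y), int(x), int(saliency[y, x])))
        # exclude this neighbourhood from later passes
        y0, y1 = max(0, y - suppress_radius), min(h, y + suppress_radius + 1)
        x0, x1 = max(0, x - suppress_radius), min(w, x + suppress_radius + 1)
        s[y0:y1, x0:x1] = -1.0
    return peaks
```

Each returned tuple is `(row, col, peak_value)`, already ordered from the largest peak downward, which is exactly the "first order" the calculation unit needs.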
  • FIG. 4 is a diagram showing an example of calculating the order of line of sight.
  • the calculation unit 131 first calculates a saliency map 200a from the image 200.
  • a region 201, a region 202, a region 203, and a region 204 are defined in the saliency map 200a.
  • Each of the regions 201, 202, 203, and 204 is a region of a predetermined size including the point where the visual saliency takes a peak value.
  • the region 201 is the region containing the point with the largest peak value in the saliency map 200a.
  • the calculation unit 131 calculates the order of the line of sight in the image 200 as (A) ⁇ (B) ⁇ (C) ⁇ (D).
  • (A), (B), (C), and (D) are points corresponding to regions 201, 202, 203, and 204 when the saliency map 200a is superimposed on the image 200, respectively.
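The order computation illustrated in FIG. 4 amounts to sorting regions by peak value. A minimal sketch, with region labels and peak values invented for illustration (they are not taken from the patent's figures):

```python
# Hypothetical (label, peak_value) pairs extracted from a saliency map,
# mirroring regions (A)-(D) of FIG. 4.
regions = {"A": 240, "B": 180, "C": 150, "D": 90}

# Calculation unit: first order = labels sorted by descending peak value.
first_order = sorted(regions, key=regions.get, reverse=True)

# Determination unit: compare with a designated second order,
# e.g. one in which region (D) is given priority.
second_order = ["A", "D", "B", "C"]
orders_match = first_order == second_order
```

When `orders_match` is false, the guidance unit takes over and guides the line of sight according to the designated order.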
  • the determination unit 132 determines whether or not the designated order and the order calculated by the calculation unit 131 (hereinafter, the calculation order) are the same.
  • the specified order is an example of a second order.
  • the designated order is the order of the driver's line of sight that improves safety.
  • the designated order may be determined manually based on knowledge, or may be determined automatically by image recognition or the like.
  • FIG. 5 is a diagram showing an example of a line-of-sight guidance method. As shown in FIG. 5, it is assumed that the designation order for the image 200 is (A) ⁇ (D) ⁇ (B) ⁇ (C). At this time, the determination unit 132 determines that the calculation order and the designation order are different.
  • the guidance unit 133 guides the line of sight according to the designation order.
  • for example, the guidance unit 133 edits the portion of the image 200 containing (D) by compositing, transformation, or the like.
  • the guidance unit 133 outputs an image edited to increase the visual saliency of the region (region including (D)) ranked higher in the designation order than in the calculation order.
  • the guidance unit 133 edits the image by changing pixel values or by compositing. Specifically, the guidance unit 133 edits the image by, for example, locally changing the brightness of the target region, changing the color temperature of the target region relative to its surroundings, or compositing another image onto the target region.
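One of the local edits mentioned above, a brightness boost of the target region, can be sketched as follows. This is an illustrative sketch only: `boost_region`, the box convention, and the gain value are invented, and real saliency-raising edits may be more elaborate.

```python
import numpy as np

def boost_region(image, box, gain=1.4):
    """Locally raise the brightness of a target region of a grayscale
    image -- one simple way to increase that region's visual saliency.
    `box` is (y0, y1, x0, x1) in pixel coordinates."""
    out = image.astype(np.float32)
    y0, y1, x0, x1 = box
    out[y0:y1, x0:x1] *= gain                    # brighten only the target
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)
```

The same pattern extends to the other edits listed (shifting color temperature would scale color channels unevenly; compositing would blend another image into the box).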
  • the guidance unit 133 causes the output device 30 to display an object for guiding the line of sight to an area with a higher rank in the designation order than the rank in the calculation order, in such a manner that the user can view it simultaneously with the scenery.
  • the calculation unit 131 calculates the calculation order for the landscape image in the user's field of view. The user here is assumed to be the driver of the vehicle V.
  • the output device 30 is, for example, a display device with a transparent display. As shown in FIG. 6, the guidance unit 133 causes the output device 30 to display the object 311.
  • the user can simultaneously view the scenery 310 and the object 311 displayed by the guidance unit 133 through the output device 30.
  • the guidance unit 133 may display an image in which the scenery 310 and the object 311 are superimposed.
  • the object 311 is for increasing the line-of-sight rank of the bicycle, which is a traffic participant to which attention should be paid with priority.
  • the output device 30 may be a mirror having an image display function.
  • FIG. 7 is a diagram showing an example of a line-of-sight guidance method.
  • the guidance unit 133 causes the output device 30 to display the object 321.
  • the user can simultaneously view the scenery 320 and the object 321 displayed by the guidance unit 133 via the output device 30.
  • the object 321 is for raising the line-of-sight rank of the following vehicle, which is a traffic participant that should be given priority attention.
  • FIG. 8 is a flowchart showing the processing flow of the line-of-sight guidance device. As shown in FIG. 8, first, the camera 20 provided on the vehicle V captures an image (step S101).
  • the gaze guidance device 10 calculates the gaze order based on visual salience from the image (step S102).
  • the visual guidance device 10 determines whether or not the calculated order matches the specified order (step S103).
  • if the calculated order does not match the designated order (step S103, No), the line-of-sight guidance device 10 guides the line of sight according to the designated order (step S104).
  • if the calculated order matches the designated order (step S103, Yes), the line-of-sight guidance device 10 outputs the captured image as it is (step S106). In this case, the line-of-sight guidance device 10 does not have to output an image.
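The branch in steps S102 to S106 can be condensed into a small sketch. `compute_order` is a hypothetical stand-in for the saliency-based order calculation, and the return values are illustrative labels rather than the device's actual outputs:

```python
def gaze_guidance_step(image, designated_order, compute_order):
    """One pass of the FIG. 8 flow after image capture (S101)."""
    calculated = compute_order(image)        # S102: order from visual saliency
    if calculated == designated_order:       # S103: compare the two orders
        return ("no_guidance", image)        # S106: output the image as-is
    return ("guide", designated_order)       # S104: guide per designated order
```

A matching order passes the frame through untouched; a mismatch triggers guidance toward the designated order.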
  • as described above, the calculation unit 131 of the line-of-sight guidance device 10 calculates a first order in which a plurality of regions in the image, determined based on the degree of ease with which the line of sight gathers, are arranged in descending order of the peak value of that degree.
  • the determination unit 132 determines whether the specified second order and the first order are the same. When the determining unit 132 determines that the first order and the second order are different, the guiding unit 133 guides the line of sight according to the second order.
  • the determination unit 132 determines whether or not the second order, which is designated so that the order of the specific object is higher, is the same as the first order.
  • the guidance unit 133 guides the line of sight by outputting an image edited to increase the visual salience of the regions ranked higher in the second order than in the first order.
  • the line of sight of the vehicle driver can be guided to a specific traffic participant, improving safety.
  • the guidance unit 133 edits the image by changing the pixel value or by combining processing. This makes it possible to easily guide the line of sight.
  • the calculation unit 131 calculates the first order for the landscape image in the user's field of view.
  • the guidance unit 133 displays an object for guiding the line of sight to an area ranked higher in the second order than in the first order on a predetermined display device in a manner that allows the user to view the object simultaneously with the scenery.
  • the determination unit 132 classifies images into classes corresponding to the output obtained by inputting the saliency map and the feature amount output from the encoder to a CNN having an output layer corresponding to each class.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Navigation (AREA)

Abstract

A calculation unit (131) of this gaze guidance device (10) calculates a first order in which a plurality of areas in an image, determined on the basis of the degree of ease in focusing on a gaze, are arranged in order from the largest peak value of the degree of ease in focusing on the gaze. A determination unit (132) determines whether a designated second order and the first order are the same. A guidance unit (133) guides a gaze that is based on the second order when the determination unit (132) determines that the first order and the second order differ from each other.

Description

Gaze Guidance Device, Gaze Guidance Method, Gaze Guidance Program, and Storage Medium
 The present invention relates to a gaze guidance device, a gaze guidance method, a gaze guidance program, and a storage medium.
 Conventionally, there is known a technique for estimating information related to the movement of the gaze point based on visual saliency calculated from images.
Japanese Unexamined Patent Application Publication No. 2021-77248
 However, the conventional technology has the problem that it is sometimes difficult to guide the line of sight.
 For example, the line-of-sight movement route estimated from an image of the front of a vehicle may not match the line-of-sight movement route for driving the vehicle safely.
 For example, in order for a vehicle driver to drive safely, it may be desirable to give priority to an inconspicuous pedestrian over a building with an eye-catching color or shape.
 The present invention has been made in view of the above, and aims to provide a line-of-sight guidance device, a line-of-sight guidance method, a line-of-sight guidance program, and a storage medium that can guide the line of sight.
 The line-of-sight guidance device according to claim 1 includes: a calculation unit that calculates a first order in which a plurality of regions in an image, determined based on the degree of ease with which the line of sight gathers, are arranged in descending order of the peak value of that degree; a determination unit that determines whether or not a designated second order and the first order are the same; and a guidance unit that guides the line of sight according to the second order when the determination unit determines that the first order and the second order are different.
 The line-of-sight guidance method according to claim 6 is a line-of-sight guidance method executed by a computer, and includes: a calculation step of calculating a first order in which a plurality of regions in an image, determined based on the degree of ease with which the line of sight gathers, are arranged in descending order of the peak value of that degree; a determination step of determining whether or not a designated second order and the first order are the same; and a guidance step of guiding the line of sight according to the second order when the determination step determines that the first order and the second order are different.
 The line-of-sight guidance program according to claim 7 causes a computer to execute: a calculation step of calculating a first order in which a plurality of regions in an image, determined based on the degree of ease with which the line of sight gathers, are arranged in descending order of the peak value of that degree; a determination step of determining whether or not a designated second order and the first order are the same; and a guidance step of guiding the line of sight according to the second order when the determination step determines that the first order and the second order are different.
 The storage medium according to claim 8 stores a line-of-sight guidance program for causing a computer to execute: a calculation step of calculating a first order in which a plurality of regions in an image, determined based on the degree of ease with which the line of sight gathers, are arranged in descending order of the peak value of that degree; a determination step of determining whether or not a designated second order and the first order are the same; and a guidance step of guiding the line of sight according to the second order when the determination step determines that the first order and the second order are different.
FIG. 1 is a diagram showing an overview of the line-of-sight guidance process. FIG. 2 is a diagram showing a configuration example of the line-of-sight guidance device. FIG. 3 is a diagram explaining visual saliency. FIG. 4 is a diagram showing an example of calculating the order of the line of sight. FIG. 5 is a diagram showing an example of a line-of-sight guidance method. FIG. 6 is a diagram showing an example of a line-of-sight guidance method. FIG. 7 is a diagram showing an example of a line-of-sight guidance method. FIG. 8 is a flowchart showing the processing flow of the line-of-sight guidance device.
 Below, modes for carrying out the present invention (hereinafter, embodiments) will be described with reference to the drawings. The present invention is not limited to the embodiments described below. Furthermore, in the description of the drawings, the same parts are given the same reference numerals.
[First Embodiment]
 The line-of-sight guidance device according to the first embodiment calculates the movement order of the line of sight based on an image, and performs guidance so that the movement order matches a designated order.
 FIG. 1 is a diagram showing an overview of the line-of-sight guidance process. As shown in FIG. 1, the line-of-sight guidance device 10 is provided in a vehicle V.
 The line-of-sight guidance device 10 may be an in-vehicle device such as a drive recorder or a car navigation system, or an information processing device such as a personal computer or a server device. The blocks may also be distributed as appropriate; for example, the calculation unit 131 and the determination unit 132 may reside on a server while the guidance unit 133 resides on a terminal in the vehicle, and the blocks are not necessarily integrated into a single device.
 As shown in FIG. 1, the line-of-sight guidance device 10 first captures an image in front of the vehicle V (step S1).
 Next, the line-of-sight guidance device 10 calculates the order of the line of sight from the image based on visual saliency (step S2). The method of calculating this order is described later.
 Then, if the order of the line of sight calculated in step S2 differs from the designated order, the line-of-sight guidance device 10 guides the line of sight according to the designated order (step S3).
 For example, the line-of-sight guidance device 10 can guide the line of sight by outputting an image or sound.
 FIG. 2 is a diagram showing a configuration example of the line-of-sight guidance device. As shown in FIG. 2, the line-of-sight guidance device 10 has an interface unit 11, a storage unit 12, and a control unit 13.
 The interface unit 11 is an interface for inputting and outputting data. The interface unit 11 may also be a communication module capable of data communication with other devices via a communication network such as the Internet.
 In the example of FIG. 2, the interface unit 11 is connected to the camera 20 and the output device 30. The camera 20 is provided in the vehicle V and photographs the surroundings or interior of the vehicle V. The output device 30 is a display that displays images or a speaker that outputs audio.
 The output device 30 may also be a see-through projection system with a transparent screen and a projector. The transparent screen may be the front window of the vehicle. The output device 30 may also be AR (Augmented Reality) goggles that allow real scenery and CG to be viewed at the same time. The output device 30 may be any device capable of displaying an object superimposed on the scenery, such as a head-up display, an MR (Mixed Reality) device, or a mirror device having a transmissive display.
 The storage unit 12 stores various programs executed by the line-of-sight guidance device 10, data necessary for executing processing, and the like.
 The storage unit 12 stores model information 121. The model information 121 is parameters, such as weights, for constructing a neural network that calculates visual saliency.
 The control unit 13 is realized by a controller such as a CPU (Central Processing Unit) or MPU (Micro Processing Unit) executing the various programs stored in the storage unit 12, and controls the operation of the entire line-of-sight guidance device 10.
 The control unit 13 is not limited to a CPU or MPU, and may be implemented by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array).
 制御部13は、計算部131、判定部132及び誘導部133を有する。 The control unit 13 has a calculation unit 131 , a determination unit 132 and a guidance unit 133 .
 計算部131は、視線の集まりやすさの度合いに基づいて決定される画像中の複数の領域を、視線の集まりやすさの度合いのピーク値が大きい順に並べた順序を計算する。 The calculation unit 131 calculates the order in which the plurality of regions in the image, which are determined based on the degree of ease of line-of-sight gathering, are arranged in descending order of the peak value of the degree of ease of line-of-sight gathering.
 ここで、計算部131によって計算される順序は、第1の順序の一例である。また、視線の集まりやすさの度合いは、例えば視覚的顕著性である。 Here, the order calculated by the calculation unit 131 is an example of the first order. Also, the degree of easiness of attracting lines of sight is, for example, visual saliency.
 図3を用いて、視覚的顕著性について説明する。図3は、視覚的顕著性を説明する図である。図3に示すように、視覚的顕著性は、車両の前方を写した画像について、運転者の視線の位置を推定して得られる指標である(例えば、特許文献1を参照)。 Visual salience will be explained using FIG. FIG. 3 is a diagram illustrating visual salience. As shown in FIG. 3, visual saliency is an index obtained by estimating the position of the line of sight of the driver for an image showing the front of the vehicle (see, for example, Patent Document 1).
 視覚的顕著性は、ニューラルネットワークに画像を入力することで演算されるものであってもよい。例えば、当該ニューラルネットワークは、広範な分野で写した大量の画像と、それらを実際に見た複数の被験者の視線情報とを基に訓練される。 Visual salience may be calculated by inputting an image into a neural network. For example, the neural network is trained on a large number of images taken in a wide range of fields and the gaze information of multiple subjects who actually saw them.
 視覚的顕著性は、例えば画像の各画素に与えられる8bit(0~255)の値であって、運転者の視線の位置である確率が大きいほど大きくなる値として表される。 Visual salience is, for example, an 8-bit (0 to 255) value given to each pixel of an image, and is expressed as a value that increases as the probability of being the position of the driver's line of sight increases.
 そのため、図3のように、視覚的顕著性を画像の対応する画素に配置(マッピング)することで、顕著性マップを得ることができる。顕著性マップは、画像における画素ごとの視線の集まりやすさの度合いをマッピングしたデータということができる。 Therefore, as shown in FIG. 3, a saliency map can be obtained by arranging (mapping) the visual saliency values onto the corresponding pixels of the image. The saliency map can thus be regarded as data that maps, for each pixel of the image, the degree of ease with which the line of sight gathers there.
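As a sketch of the per-pixel representation above, the following snippet normalizes hypothetical raw saliency scores into the 8-bit (0 to 255) map described in the text. The model producing the raw scores and the min-max normalization are assumptions; only the 0-255 per-pixel encoding is from the text.

```python
import numpy as np

def to_saliency_map(raw_scores: np.ndarray) -> np.ndarray:
    """Normalize raw per-pixel saliency scores (e.g. a neural-network
    output; hypothetical here) into an 8-bit (0-255) saliency map."""
    lo, hi = raw_scores.min(), raw_scores.max()
    if hi == lo:                      # flat input: nothing is salient
        return np.zeros_like(raw_scores, dtype=np.uint8)
    scaled = (raw_scores - lo) / (hi - lo) * 255.0
    return scaled.astype(np.uint8)    # same shape as the image plane

raw = np.array([[0.1, 0.9], [0.5, 0.1]])
smap = to_saliency_map(raw)
print(smap)   # highest score -> 255, lowest -> 0
```

The resulting array is aligned with the image plane, so each entry is directly the saliency of the corresponding pixel.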
 ここで、画像に対応する平面である顕著性マップを分割すること等により、複数の領域(塊)を定義することができる。このとき、各領域には、視覚的顕著性がピーク値を取る点が存在する。なお、分割方法については特に限定しないが、例えばSSD(Single shot multiple detector)等を用いて検出された物体領域に基づくものであってもよいし、k-meansなどのクラスタリング手法に基づくものであってもよい。あるいは領域分割を行わずに画素単位で顕著性マップを走査及び比較することにより直接的にピーク値をとる点を検出してもよい。 Here, a plurality of regions (clusters) can be defined by, for example, dividing the saliency map, which is a plane corresponding to the image. Each region then contains a point at which the visual saliency takes a peak value. The division method is not particularly limited; it may be based on object regions detected using an SSD (Single shot multiple detector) or the like, or on a clustering method such as k-means. Alternatively, the points taking peak values may be detected directly, without region division, by scanning and comparing the saliency map pixel by pixel.
 ここで、計算された視覚的顕著性の領域の単位で、ピーク値の大きい領域から、ピーク値の小さい領域の順に視線が移動しやすいことが知られている(例えば、参考文献1を参照)。
 参考文献1:足立他,「視線計測を用いた顕著性マップにおけるトップダウン要因とボトムアップ要因の比較」, MIRU2017, PS2-19, 2017年.
Here, it is known that the line of sight tends to move, in units of the calculated visual-saliency regions, from regions with larger peak values to regions with smaller peak values (see, for example, Reference 1).
Reference 1: Adachi et al., ``Comparison of top-down and bottom-up factors in saliency maps using gaze measurement'', MIRU2017, PS2-19, 2017.
 なお、ピーク値及び領域は任意の方法で定義可能である。例えば、視線誘導装置10は、顕著性マップ全体において視覚的顕著性が最大となる点を特定する。そして、視線誘導装置10は、特定済みの点を中心とする所定の範囲の領域を除いた領域から、さらに視覚的顕著性が最大の点を特定する。視線誘導装置10は、これを繰り返すことによって特定した複数の点をピーク値を取る点とみなすことができる。 The peak value and area can be defined by any method. For example, the eye guidance device 10 identifies the point of maximum visual salience in the entire saliency map. Then, the visual guidance device 10 further identifies the point with the maximum visual salience from the area excluding the area within the predetermined range centered on the identified point. The visual guidance device 10 can regard the specified points as points having peak values by repeating this process.
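The iterative peak extraction just described can be sketched as follows. The square exclusion window and its radius are illustrative assumptions, since the text leaves the exact shape of the excluded range open.

```python
import numpy as np

def extract_peaks(smap: np.ndarray, n_peaks: int, radius: int):
    """Repeatedly take the global maximum of the saliency map and mask
    out a (2*radius+1)-square window around it, mirroring the iterative
    procedure described above.  Returns [((y, x), peak_value), ...]."""
    work = smap.astype(np.int32).copy()   # widen so -1 can mark exclusions
    peaks = []
    for _ in range(n_peaks):
        y, x = np.unravel_index(np.argmax(work), work.shape)
        peaks.append(((y, x), int(smap[y, x])))
        y0, y1 = max(0, y - radius), min(work.shape[0], y + radius + 1)
        x0, x1 = max(0, x - radius), min(work.shape[1], x + radius + 1)
        work[y0:y1, x0:x1] = -1           # exclude this region
    return peaks
```

For example, on a 5x5 map with values 200 at (0, 0) and 100 at (4, 4), `extract_peaks(smap, 2, 1)` finds the two peaks in descending order of value.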
 図4は、視線の順序の計算例を示す図である。図4に示すように、まず、計算部131は、画像200から顕著性マップ200aを計算する。ここで、顕著性マップ200aには、領域201、領域202、領域203、領域204が定義される。 FIG. 4 is a diagram showing an example of calculating the order of the line of sight. As shown in FIG. 4, the calculation unit 131 first calculates a saliency map 200a from the image 200. Here, a region 201, a region 202, a region 203, and a region 204 are defined in the saliency map 200a.
 領域201、領域202、領域203、領域204は、いずれも視覚的顕著性がピーク値を取る点を含む所定の大きさの領域である。 Each of the regions 201, 202, 203, and 204 is a region of a predetermined size including the point where the visual saliency takes a peak value.
 ここで、各領域をピーク値の大きさの順に並べると、領域201、領域202、領域203、領域204となるものとする。例えば、領域201は、顕著性マップ200aにおいて最大のピーク値を取る点を含む領域である。 Here, it is assumed that arranging the regions in descending order of peak value yields the region 201, the region 202, the region 203, and the region 204 in that order. For example, the region 201 is the region containing the point with the largest peak value in the saliency map 200a.
 このとき、計算部131は、画像200における視線の順序を、(A)→(B)→(C)→(D)と計算する。(A)、(B)、(C)、(D)は、それぞれ顕著性マップ200aを画像200に重ね合わせたときに、領域201、領域202、領域203、領域204に対応する点である。 At this time, the calculation unit 131 calculates the order of the line of sight in the image 200 as (A)→(B)→(C)→(D). (A), (B), (C), and (D) are points corresponding to regions 201, 202, 203, and 204 when the saliency map 200a is superimposed on the image 200, respectively.
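A minimal sketch of this order calculation, using the four regions of FIG. 4. The peak values assigned to (A) through (D) are assumptions made up for illustration; only the descending-sort rule is from the text.

```python
# Hypothetical peak values for the regions corresponding to (A)-(D).
peaks = {"A": 250, "B": 180, "C": 120, "D": 90}

# First order: regions arranged in descending order of peak value.
first_order = sorted(peaks, key=peaks.get, reverse=True)
print(first_order)   # ['A', 'B', 'C', 'D'] i.e. (A)->(B)->(C)->(D)
```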
 判定部132は、指定された指定順序と計算部131によって計算された順序(以下、計算順序)とが同じであるか否かを判定する。指定された順序は、第2の順序の一例である。指定順序は、より安全性が向上するような運転者の視線の順序である。 The determination unit 132 determines whether or not a designated order and the order calculated by the calculation unit 131 (hereinafter, the calculation order) are the same. The designated order is an example of the second order. The designated order is an order of the driver's line of sight that improves safety.
 指定順序は、知見に基づいて手動で決定されるものであってもよいし、画像認識等により自動的に決定されるものであってもよい。 The designated order may be determined manually based on knowledge, or may be determined automatically by image recognition or the like.
 図5は、視線の誘導方法の例を示す図である。図5に示すように、画像200に対する指定順序は、(A)→(D)→(B)→(C)であったものとする。このとき、判定部132は、計算順序と指定順序が異なると判定する。 FIG. 5 is a diagram showing an example of a line-of-sight guidance method. As shown in FIG. 5, it is assumed that the designation order for the image 200 is (A)→(D)→(B)→(C). At this time, the determination unit 132 determines that the calculation order and the designation order are different.
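The judgment, together with the choice of which regions the guidance should boost, can be sketched as follows. The helper name `regions_to_boost` is hypothetical; the orders are those of the FIG. 4 and FIG. 5 examples.

```python
def regions_to_boost(computed, designated):
    """Return regions whose rank in the designated order is higher
    (smaller index) than in the computed order -- the candidates whose
    visual saliency the guidance unit should increase."""
    comp_rank = {r: i for i, r in enumerate(computed)}
    des_rank = {r: i for i, r in enumerate(designated)}
    return [r for r in designated if des_rank[r] < comp_rank[r]]

computed = ["A", "B", "C", "D"]      # calculation order (FIG. 4)
designated = ["A", "D", "B", "C"]    # designated order (FIG. 5)
print(computed == designated)                  # False -> guidance needed
print(regions_to_boost(computed, designated))  # ['D']
```

This matches the text: in the FIG. 5 example, (D) is the region whose designated rank exceeds its computed rank, so its saliency is to be increased.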
 誘導部133は、判定部132によって計算順序と指定順序が異なると判定された場合、指定順序に応じた視線の誘導を行う。 When the determination unit 132 determines that the calculation order and the designation order are different, the guidance unit 133 guides the line of sight according to the designation order.
 例えば、図5の例では、(D)を含む領域の視覚的顕著性を増加させることで、計算順序を指定順序に近付けることができると考えられる。そこで、誘導部133は、画像200における(D)に対して、合成や変形等の編集を行う。 For example, in the example of FIG. 5, it is thought that increasing the visual saliency of the region containing (D) can bring the calculation order closer to the designated order. Therefore, the guiding unit 133 edits (D) in the image 200 by combining, transforming, or the like.
 このように、誘導部133は、計算順序における順位よりも指定順序における順位が高い領域((D)を含む領域)を、視覚的顕著性が大きくなるように編集した画像を出力することにより視線を誘導する。 In this way, the guidance unit 133 guides the line of sight by outputting an image in which the region ranked higher in the designated order than in the calculation order (the region including (D)) has been edited so as to increase its visual saliency.
 例えば、誘導部133は、画素値を変化させること、又は合成処理により画像を編集する。具体的には、誘導部133は、対象の領域の輝度を局所的に変化させること、対象の領域の色温度を周辺と比べて変化させること、対象の領域に別の画像を合成すること等により編集を行う。 For example, the guidance unit 133 edits the image by changing pixel values or by compositing. Specifically, the guidance unit 133 performs the editing by locally changing the brightness of the target region, changing the color temperature of the target region relative to its surroundings, compositing another image into the target region, or the like.
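One of the edits mentioned above, locally raising the brightness of the target region, might look like the following sketch. The rectangular region and the gain factor are illustrative assumptions; the text does not fix either.

```python
import numpy as np

def boost_region_brightness(img: np.ndarray, box, gain: float = 1.3):
    """Locally raise the brightness inside `box` = (y0, y1, x0, x1),
    leaving the rest of the image unchanged, then clip back to 8-bit."""
    y0, y1, x0, x1 = box
    out = img.astype(np.float32)      # copy, so the input stays intact
    out[y0:y1, x0:x1] *= gain
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.full((4, 4), 100, dtype=np.uint8)
edited = boost_region_brightness(img, (0, 2, 0, 2))
print(edited[0, 0], edited[3, 3])   # 130 100
```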
 さらに、誘導部133は、計算順序における順位よりも指定順序における順位が高い領域に視線を誘導するためのオブジェクトを、ユーザが風景と同時に見ることが可能な態様で出力装置30に表示させる。 Furthermore, the guidance unit 133 causes the output device 30 to display an object for guiding the line of sight to an area with a higher rank in the designation order than the rank in the calculation order, in such a manner that the user can view it simultaneously with the scenery.
 この場合、計算部131は、ユーザの視界の風景の画像について計算順序を計算するものとする。また、ここでのユーザは車両Vの運転者であるものとする。 In this case, the calculation unit 131 calculates the calculation order for an image of the scenery in the user's field of view. The user here is assumed to be the driver of the vehicle V.
 この場合、出力装置30は、例えば透明のディスプレイを持つ表示装置である。図6に示すように、誘導部133は、オブジェクト311を出力装置30に表示させる。 In this case, the output device 30 is, for example, a display device with a transparent display. As shown in FIG. 6 , the guidance unit 133 causes the output device 30 to display the object 311 .
 これにより、ユーザは、出力装置30を介して、風景310と、誘導部133によって表示されたオブジェクト311と、を同時に見ることができる。 As a result, the user can simultaneously view the scenery 310 and the object 311 displayed by the guidance unit 133 through the output device 30.
 また、誘導部133は、風景310とオブジェクト311とを重畳させた画像を表示させてもよい。オブジェクト311は、優先して注意すべき交通参加者である自転車についての、視線の順位を上げるためのものである。 Further, the guidance unit 133 may display an image in which the scenery 310 and the object 311 are superimposed. The object 311 is for increasing the line-of-sight rank of the bicycle, which is a traffic participant to which attention should be paid with priority.
 図7に示すように、出力装置30は画像表示機能を有するミラーであってもよい。図7は、視線の誘導方法の例を示す図である。 As shown in FIG. 7, the output device 30 may be a mirror having an image display function. FIG. 7 is a diagram showing an example of a line-of-sight guidance method.
 図7の例では、誘導部133は、オブジェクト321を出力装置30に表示させる。 In the example of FIG. 7, the guidance unit 133 causes the output device 30 to display the object 321.
 これにより、ユーザは、出力装置30を介して、風景320と、誘導部133によって表示されたオブジェクト321と、を同時に見ることができる。 As a result, the user can simultaneously view the scenery 320 and the object 321 displayed by the guidance unit 133 via the output device 30.
 オブジェクト321は、優先して注意すべき交通参加者である後続車両についての、視線の順位を上げるためのものである。 The object 321 is for raising the line-of-sight rank of the following vehicle, which is a traffic participant that should be given priority attention.
 図8は、視線誘導装置の処理の流れを示すフローチャートである。図8に示すように、まず、車両Vに備えられたカメラ20は、画像を撮影する(ステップS101)。 FIG. 8 is a flowchart showing the processing flow of the gaze guidance device. As shown in FIG. 8, first, the camera 20 provided in the vehicle V captures an image (step S101).
 次に、視線誘導装置10は、画像から視覚的顕著性に基づく視線の順序を計算する(ステップS102)。 Next, the gaze guidance device 10 calculates the gaze order based on visual salience from the image (step S102).
 ここで、視線誘導装置10は、計算した順序が指定順序と一致するか否かを判定する(ステップS103)。 Here, the visual guidance device 10 determines whether or not the calculated order matches the specified order (step S103).
 計算した順序が指定順序と一致しない場合(ステップS103、No)、視線誘導装置10は、指定順序に従って視線を誘導する(ステップS104)。 If the calculated order does not match the designated order (step S103, No), the visual guidance device 10 guides the line of sight according to the designated order (step S104).
 計算した順序が指定順序と一致する場合(ステップS103、Yes)、視線誘導装置10は、撮影した画像をそのまま出力する(ステップS106)。この場合、視線誘導装置10は、画像を出力しなくてもよい。 If the calculated order matches the specified order (step S103, Yes), the visual guidance device 10 outputs the captured image as it is (step S106). In this case, the visual guidance device 10 does not have to output an image.
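The flow of steps S101 through S106 can be sketched as a single function. The callables standing in for the calculation and guidance units, and their names, are assumptions; the branching mirrors the flowchart.

```python
def gaze_guidance_step(image, designated_order, compute_order, guide):
    """One pass of the FIG. 8 flow (S101 is the capture that produced
    `image`).  `compute_order` plays the calculation unit, `guide` the
    guidance unit; both are hypothetical stand-ins."""
    computed = compute_order(image)            # S102
    if computed != designated_order:           # S103: No
        return guide(image, designated_order)  # S104: guide the gaze
    return image                               # S103: Yes -> S106

out = gaze_guidance_step(
    "frame", ["A", "D", "B", "C"],
    compute_order=lambda img: ["A", "B", "C", "D"],
    guide=lambda img, order: ("edited", img, order))
print(out[0])   # 'edited'
```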
[第1の実施形態の効果]
 これまで説明してきたように、視線誘導装置10の計算部131は、視線の集まりやすさの度合いに基づいて決定される画像中の複数の領域を、視線の集まりやすさの度合いのピーク値が大きい順に並べた第1の順序を計算する。判定部132は、指定された第2の順序と第1の順序とが同じであるか否かを判定する。誘導部133は、判定部132によって第1の順序と第2の順序が異なると判定された場合、第2の順序に応じた視線の誘導を行う。
[Effects of the first embodiment]
As described above, the calculation unit 131 of the gaze guidance device 10 calculates a first order in which a plurality of regions in an image, determined based on the degree of ease of line-of-sight gathering, are arranged in descending order of the peak value of that degree. The determination unit 132 determines whether or not a designated second order and the first order are the same. When the determination unit 132 determines that the first order and the second order are different, the guidance unit 133 guides the line of sight according to the second order.
 これにより、ユーザの視線を誘導することができる。また、例えば、車両の運転者の視線を誘導し、安全性を向上させることができる。 With this, it is possible to guide the user's line of sight. Also, for example, it is possible to guide the line of sight of the driver of the vehicle and improve safety.
 判定部132は、特定の物体の順位が高くなるように指定された第2の順序と第1の順序とが同じであるか否かを判定する。誘導部133は、第1の順序における順位よりも第2の順序における順位が高い領域を、視覚的顕著性が大きくなるように編集した画像を出力することにより視線を誘導する。 The determination unit 132 determines whether or not the second order, which is designated so that the order of the specific object is higher, is the same as the first order. The guidance unit 133 guides the line of sight by outputting an image edited to increase the visual salience of the regions ranked higher in the second order than in the first order.
 これにより、例えば、車両の運転者の視線を特定の交通参加者へ誘導し、安全性を向上させることができる。 As a result, for example, the line of sight of the vehicle driver can be guided to a specific traffic participant, improving safety.
 誘導部133は、画素値を変化させること、又は合成処理により画像を編集する。これにより、容易に視線を誘導することができる。 The guidance unit 133 edits the image by changing the pixel value or by combining processing. This makes it possible to easily guide the line of sight.
 計算部131は、ユーザの視界の風景の画像について第1の順序を計算する。誘導部133は、第1の順序における順位よりも第2の順序における順位が高い領域に視線を誘導するためのオブジェクトを、ユーザが風景と同時に見ることが可能な態様で所定の表示装置に表示させる。 The calculation unit 131 calculates the first order for an image of the scenery in the user's field of view. The guidance unit 133 causes a predetermined display device to display an object for guiding the line of sight to a region ranked higher in the second order than in the first order, in a manner that allows the user to view the object simultaneously with the scenery.
 これにより、例えば透明のディスプレイを使って、実際の風景と視線誘導のための仮想的なオブジェクトを同時に見ることができる。 This allows the user to see, for example through a transparent display, the actual scenery and a virtual object for gaze guidance at the same time.
 判定部132は、クラスのそれぞれに対応する出力層を持つCNNに、顕著性マップ及びエンコーダから出力された特徴量を入力して得られた出力に対応するクラスに、画像を分類する。 The determination unit 132 classifies images into classes corresponding to the output obtained by inputting the saliency map and the feature amount output from the encoder to a CNN having an output layer corresponding to each class.
 これにより、顕著性マップ推定における誤差の、クラス分類に対する影響を低減させ、分類精度を向上させることができる。 As a result, it is possible to reduce the impact of errors in saliency map estimation on class classification and improve classification accuracy.
 10 視線誘導装置
 11 インタフェース部
 12 記憶部
 13 制御部
 20 カメラ
 30 出力装置
 121 モデル情報
 131 計算部
 132 判定部
 133 誘導部
REFERENCE SIGNS LIST 10 visual guidance device 11 interface unit 12 storage unit 13 control unit 20 camera 30 output device 121 model information 131 calculation unit 132 determination unit 133 guidance unit

Claims (8)

  1.  視線の集まりやすさの度合いに基づいて決定される画像中の複数の領域を、前記視線の集まりやすさの度合いのピーク値が大きい順に並べた第1の順序を計算する計算部と、
     指定された第2の順序と前記第1の順序とが同じであるか否かを判定する判定部と、
     前記判定部によって前記第1の順序と前記第2の順序が異なると判定された場合、第2の順序に応じた視線の誘導を行う誘導部と、
     を有することを特徴とする視線誘導装置。
    a calculation unit that calculates a first order in which a plurality of regions in an image determined based on the degree of ease of line-of-sight gathering are arranged in descending order of the peak value of the degree of ease of line-of-sight gathering;
    a determination unit that determines whether the designated second order and the first order are the same;
    a guiding unit that guides a line of sight according to the second order when the determining unit determines that the first order and the second order are different;
    A line-of-sight guidance device comprising:
  2.  前記判定部は、特定の物体の順位が高くなるように指定された第2の順序と前記第1の順序とが同じであるか否かを判定することを特徴とする請求項1に記載の視線誘導装置。 2. The line-of-sight guidance device according to claim 1, wherein the determination unit determines whether or not the first order is the same as a second order designated so that a specific object is ranked higher.
  3.  前記誘導部は、前記第1の順序における順位よりも前記第2の順序における順位が高い領域を、視覚的顕著性が大きくなるように編集した画像を出力することにより視線を誘導することを特徴とする請求項1又は2に記載の視線誘導装置。 The guidance unit guides the line of sight by outputting an image edited to increase the visual salience of the regions ranked higher in the second order than in the first order. The line-of-sight guidance device according to claim 1 or 2.
  4.  前記誘導部は、画素値を変化させること、又は合成処理により前記画像を編集することを特徴とする請求項3に記載の視線誘導装置。 4. The line of sight guidance device according to claim 3, wherein the guidance unit edits the image by changing a pixel value or by combining processing.
  5.  前記計算部は、ユーザの視界の風景の画像について前記第1の順序を計算し、
     前記誘導部は、前記第1の順序における順位よりも前記第2の順序における順位が高い領域に視線を誘導するためのオブジェクトを、前記ユーザが前記風景と同時に見ることが可能な態様で所定の表示装置に表示させることを特徴とする請求項1に記載の視線誘導装置。
    The calculation unit calculates the first order for images of scenery in a user's field of view;
    The line-of-sight guidance device according to claim 1, wherein the guidance unit causes a predetermined display device to display an object for guiding the line of sight to a region ranked higher in the second order than in the first order, in a manner that allows the user to view the object simultaneously with the scenery.
  6.  コンピュータによって実行される視線誘導方法であって、
     視線の集まりやすさの度合いに基づいて決定される画像中の複数の領域を、前記視線の集まりやすさの度合いのピーク値が大きい順に並べた第1の順序を計算する計算ステップと、
     指定された第2の順序と前記第1の順序とが同じであるか否かを判定する判定ステップと、
     前記判定ステップによって前記第1の順序と前記第2の順序が異なると判定された場合、第2の順序に応じた視線の誘導を行う誘導ステップと、
     を含むことを特徴とする視線誘導方法。
    A computer-implemented eye guidance method comprising:
    a calculating step of calculating a first order in which a plurality of regions in an image determined based on the degree of ease of attracting the line of sight are arranged in descending order of the peak value of the degree of ease of attracting the line of sight;
    a determination step of determining whether the designated second order and the first order are the same;
    a guiding step of guiding a line of sight according to the second order when the determining step determines that the first order and the second order are different;
    A line-of-sight guidance method comprising:
  7.  視線の集まりやすさの度合いに基づいて決定される画像中の複数の領域を、前記視線の集まりやすさの度合いのピーク値が大きい順に並べた第1の順序を計算する計算ステップと、
     指定された第2の順序と前記第1の順序とが同じであるか否かを判定する判定ステップと、
     前記判定ステップによって前記第1の順序と前記第2の順序が異なると判定された場合、第2の順序に応じた視線の誘導を行う誘導ステップと、
     をコンピュータに実行させるための視線誘導プログラム。
    a calculating step of calculating a first order in which a plurality of regions in an image determined based on the degree of ease of attracting the line of sight are arranged in descending order of the peak value of the degree of ease of attracting the line of sight;
    a determination step of determining whether the designated second order and the first order are the same;
    a guiding step of guiding a line of sight according to the second order when the determining step determines that the first order and the second order are different;
    A line-of-sight guidance program for causing a computer to execute the above steps.
  8.  視線の集まりやすさの度合いに基づいて決定される画像中の複数の領域を、前記視線の集まりやすさの度合いのピーク値が大きい順に並べた第1の順序を計算する計算ステップと、
     指定された第2の順序と前記第1の順序とが同じであるか否かを判定する判定ステップと、
     前記判定ステップによって前記第1の順序と前記第2の順序が異なると判定された場合、第2の順序に応じた視線の誘導を行う誘導ステップと、
     をコンピュータに実行させるための視線誘導プログラムを記憶したことを特徴とする記憶媒体。
    a calculating step of calculating a first order in which a plurality of regions in an image determined based on the degree of ease of attracting the line of sight are arranged in descending order of the peak value of the degree of ease of attracting the line of sight;
    a determination step of determining whether the designated second order and the first order are the same;
    a guiding step of guiding a line of sight according to the second order when the determining step determines that the first order and the second order are different;
    A storage medium characterized by storing a line-of-sight guidance program for causing a computer to execute the above steps.
PCT/JP2022/027707 2021-07-14 2022-07-14 Gaze guidance device, gaze guidance method, gaze guidance program, and storage medium WO2023286835A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2023534856A JPWO2023286835A1 (en) 2021-07-14 2022-07-14

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021116577 2021-07-14
JP2021-116577 2021-07-14

Publications (1)

Publication Number Publication Date
WO2023286835A1 true WO2023286835A1 (en) 2023-01-19

Family

ID=84920248

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/027707 WO2023286835A1 (en) 2021-07-14 2022-07-14 Gaze guidance device, gaze guidance method, gaze guidance program, and storage medium

Country Status (2)

Country Link
JP (1) JPWO2023286835A1 (en)
WO (1) WO2023286835A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080042812A1 (en) * 2006-08-16 2008-02-21 Dunsmoir John W Systems And Arrangements For Providing Situational Awareness To An Operator Of A Vehicle
JP2016086355A (en) * 2014-10-28 2016-05-19 株式会社デンソー Gaze guide device
JP2017111649A (en) * 2015-12-17 2017-06-22 大学共同利用機関法人自然科学研究機構 Visual perception recognition assist system and visual recognition object detection system
WO2018050465A1 (en) * 2016-09-14 2018-03-22 Philips Lighting Holding B.V. Method and system for determining a saliency value for prominent roi in an image


Also Published As

Publication number Publication date
JPWO2023286835A1 (en) 2023-01-19


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22842177

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023534856

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE