WO2016189633A1 - Degree of awareness computation device, degree of awareness computation method, and degree of awareness computation program - Google Patents
Degree of awareness computation device, degree of awareness computation method, and degree of awareness computation program
- Publication number
- WO2016189633A1 (PCT/JP2015/064968)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- city
- building
- recognition
- unit
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
Definitions
- the present invention relates to a recognition degree calculation device, a recognition degree calculation method, and a recognition degree calculation program.
- Patent Literature 1 discloses that a person's living environment is evaluated and reflected in a city plan or the like.
- The present invention has been made in view of these points, and an object thereof is to provide a degree-of-recognition calculation apparatus, a degree-of-recognition calculation method, and a degree-of-recognition calculation program capable of indexing how easy a city or a building is for a user to understand.
- The degree-of-recognition calculation apparatus includes: an acquisition unit that acquires a moving image showing a spatial route of a city or a building and a movement pattern of a user's line of sight corresponding to the moving image; an element detection unit that detects, based on the moving image and the movement pattern of the line of sight, an element that the user is estimated to have recognized in the city or the building; a position specifying unit that specifies the position of the detected element in the city or the building; a creation unit that creates, based on the detected element and the specified position of the element, a mental model showing the map the user envisions for the city or the building; and a calculation unit that calculates, based on the mental model and reference information indicating the actual elements corresponding to the spatial route and the positions of those elements, the user's degree of recognition of the city or the building, which indicates how easy the city or the building is for the user to understand.
- The calculation unit may calculate the degree of recognition based on the degree of coincidence between the elements and element positions included in the mental model and the elements and element positions included in the reference information.
- the calculation unit may calculate a complexity indicating the complexity of the city or the building based on the number of the elements detected by the element detection unit.
- the element detection unit may detect the element based on a composition corresponding to an image included in the moving image and a movement pattern of the line of sight.
- the position specifying unit may specify the position of the element based on a shooting position of the image associated with an image included in the moving image.
- the position specifying unit may specify a position corresponding to an image in which the element is detected as a position of the element based on a plurality of continuous images included in the moving image.
- the acquisition unit may acquire a computer graphic moving image indicating the city or a building as the moving image.
- the acquisition unit may acquire a moving image taken by a wearable terminal attached to the user as the moving image.
- the element detection unit may identify a composition pattern of an image included in a moving image photographed by the wearable terminal, and detect the element based on the composition pattern of the image.
- the acquisition unit may acquire a movement pattern of the user's line of sight detected by a line-of-sight detection device that detects the line of sight of the user.
- The recognition degree calculation apparatus includes: an acquisition unit that acquires a moving image, captured by a wearable terminal worn by a user, showing a spatial route of a city or a building; an element detection unit that detects, based on the composition corresponding to each of a plurality of images included in the moving image, an element that the user is estimated to have recognized in the city or the building; a position specifying unit that specifies the position of the detected element in the city or the building; a creation unit that creates, based on the detected element and the specified position of the element, a mental model showing the map the user envisions for the city or the building; and a calculation unit that calculates, based on the mental model and reference information indicating the actual elements and the positions of those elements in the area corresponding to the spatial route, the user's degree of recognition of the city or the building, which indicates how easy the city or the building is for the user to understand.
- The recognition degree calculation method, executed by a computer, includes a step of acquiring a moving image showing a spatial route of a city or a building and a movement pattern of a user's line of sight corresponding to the moving image.
- The recognition degree calculation program causes a computer to function as: an acquisition unit that acquires a moving image showing a spatial route of a city or a building and a movement pattern of a user's line of sight corresponding to the moving image; an element detection unit that detects, based on the moving image and the movement pattern of the line of sight, an element that the user is estimated to have recognized in the city or the building; a position specifying unit that specifies the position of the detected element in the city or the building; a creation unit that creates, based on the detected element and the specified position of the element, a mental model showing the map the user envisions for the city or the building; and a calculation unit that calculates, based on the mental model and reference information indicating the actual elements corresponding to the spatial route and the positions of those elements, the user's degree of recognition of the city or the building.
- The present inventor considered that a user forms an image of a city or a building by recognizing the elements that constitute it, and that these elements affect how easy the city or building is for the user to understand. The inventor also found that a user recognizes the elements constituting a city or building based on the geometric composition, coordinate axes, and figures perceived when visually observing the city or building. The recognition degree calculation device 1 therefore calculates the user's degree of recognition of a city or building based on the elements the user is estimated to have recognized.
- FIG. 1 is a diagram showing an overview of a recognition degree calculation device 1 according to the first embodiment.
- The degree-of-recognition calculation apparatus 1 is a computer that calculates a degree of recognition that indexes how easy a city or building is for a user to understand. In the following description, "city or building" is collectively referred to as "city or the like".
- The recognition degree calculation device 1 acquires, from the wearable terminal 2 worn by the user, a moving image showing a spatial route of a city or the like and the movement pattern of the user's line of sight corresponding to the moving image ((1) in FIG. 1).
- The degree-of-recognition calculation apparatus 1 detects elements that the user is estimated to have recognized in the city or the like, and the positions of those elements, based on the acquired moving image and the movement pattern of the line of sight ((2) in FIG. 1).
- the elements include “path”, “edge”, “district”, “node”, and “landmark”. Details of these elements will be described later.
- Based on the detected elements and their positions, the recognition degree calculation device 1 creates a mental model showing the map the user envisions for the city or the like ((3) in FIG. 1). The recognition degree calculation device 1 then calculates the user's degree of recognition of the city or the like by calculating the degree of coincidence between the mental model and an actual map (reference information) indicating the actual elements and their positions corresponding to the spatial route ((4) in FIG. 1).
- The mental model is composed of at least one of symbols, pictures, characters, and the like representing what the user has in mind.
- The mental model is, for example, a mental map, that is, an image constructed from the user's experiences and knowledge of the city or the like.
- A designer of a city or the like can evaluate whether the city or the like is easy for users to understand based on the degree of recognition calculated by the recognition degree calculation apparatus 1.
- The designer can then apply the structure of a city or the like that users find easy to understand to the design of a new city or the like.
- In this way, the recognition degree calculation apparatus 1 can contribute to the evaluation and improvement of how easy cities and the like are to understand. Next, the configurations of the recognition degree calculation apparatus 1 and the wearable terminal 2 will be described.
- FIG. 2 is a diagram illustrating configurations of the recognition degree calculation device 1 and the wearable terminal 2 according to the first embodiment.
- the wearable terminal 2 is, for example, a computer that is worn on the user's head.
- the wearable terminal 2 is a line-of-sight detection device (eye tracking tool) that captures a landscape and records a viewpoint in the landscape.
- the user wears the wearable terminal 2 and passes through a spatial route that is a continuous space in a city or the like.
- the wearable terminal 2 captures a spatial route of a city or the like to generate a moving image and generates user's line-of-sight data.
- the spatial route is a route designated in advance by a designer or the like who investigates ease of understanding of a city or the like.
- Examples of information the user may refer to include map information such as a guide board in front of a station, map and route information displayed on a terminal such as a smartphone, and route information heard from an acquaintance (for example, orally).
- the spatial route is a route designated in advance, but is not limited to this, and the user may pass through an arbitrary spatial route.
- The recognition degree calculation device 1 may also use a mental model (mental map) of the city or the like that the user formed in the past, or a mental model (mental map) formed when the user visually recognizes a map of the city or the like.
- the wearable terminal 2 includes a storage unit 21 and a control unit 22.
- The storage unit 21 is composed of, for example, a ROM (Read Only Memory), a RAM (Random Access Memory), a hard disk, an external storage medium attached to the wearable terminal 2 (for example, an SD card (registered trademark)), an external storage device, or the like.
- The control unit 22 is composed of, for example, a CPU.
- the control unit 22 includes a position information acquisition unit 221, an imaging unit 222, a line-of-sight detection unit 223, and a transmission unit 224.
- the position information acquisition unit 221 acquires position information indicating the position (terminal position) of the wearable terminal 2 based on GPS signals received from a plurality of GPS (Global Positioning System) satellites.
- the position information acquisition unit 221 may calculate position information based on GPS signals received from a plurality of GPS satellites.
- the wearable terminal 2 may include, for example, an acceleration sensor (not shown) that detects acceleration and an orientation sensor (not shown) that detects an orientation corresponding to the front direction of the wearable terminal 2. Then, the position information acquisition unit 221 may calculate position information indicating a relative position with respect to a reference position (for example, a position where photographing is started) based on the detected acceleration and direction.
- the photographing unit 222 generates a moving image by photographing the landscape in the direction visually recognized by the user wearing the wearable terminal 2.
- The generated moving image may include a plurality of images, information indicating the reproduction position (elapsed time from the start of shooting) corresponding to each image, and information indicating the time at which shooting of the moving image started.
- the photographing unit 222 stores the generated moving image in the storage unit 21.
- the line-of-sight detection unit 223 detects the coordinates (line-of-sight position) of the line of sight in the moving image as the line of sight of the user in the direction in which the user is viewing.
- the line-of-sight detection unit 223 acquires direction information indicating the direction in which the wearable terminal 2 is facing, that is, the user's viewing direction.
- The line-of-sight detection unit 223 generates line-of-sight data that associates the time at which the line of sight was detected, the line-of-sight position, the viewing direction, and the terminal position acquired by the position information acquisition unit 221 at that time, and stores the line-of-sight data in the storage unit 21.
- the moving image and the line-of-sight data are generated in parallel, and there is line-of-sight data corresponding to each of a plurality of images included in the moving image.
- a moving image in which a pointer indicating the user's line of sight is combined may be generated by the cooperation of the photographing unit 222 and the line-of-sight detection unit 223.
- the line-of-sight detection unit 223 may detect the locus of the line-of-sight movement pattern.
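- As an illustration only: the line-of-sight data described above can be pictured as a list of per-sample records kept in step with the frames of the moving image. The Python sketch below uses hypothetical field and function names that are not part of the original disclosure; it simply bundles the detection time, the line-of-sight position, the viewing direction, and the terminal position, and pairs a record with a frame by nearest timestamp.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    """One line-of-sight record generated by the line-of-sight detection unit 223."""
    timestamp: float          # time at which the line of sight was detected (seconds)
    gaze_x: float             # line-of-sight position in the frame (pixels)
    gaze_y: float
    view_heading_deg: float   # viewing direction of the user (compass degrees)
    terminal_lat: float       # terminal position acquired by the position information acquisition unit 221
    terminal_lon: float

def sample_for_frame(gaze_data: list[GazeSample], frame_time: float) -> GazeSample:
    """Return the gaze record closest in time to a frame of the moving image,
    reflecting that the moving image and the line-of-sight data are generated in parallel."""
    return min(gaze_data, key=lambda s: abs(s.timestamp - frame_time))
```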
- the control unit 22 may analyze each of a plurality of images included in the moving image photographed by the photographing unit 222 and detect a composition indicated by the image.
- In the present embodiment, the line-of-sight data includes information indicating the terminal position acquired by the position information acquisition unit 221, but the present invention is not limited to this.
- the control unit 22 may generate information that associates the reproduction position in the moving image and the terminal position of each of the images included in the moving image.
- the transmission unit 224 performs wireless communication with an external device such as the recognition degree calculation device 1.
- Upon receiving an acquisition request for the moving image and the line-of-sight data from the recognition degree calculation device 1, the transmission unit 224 transmits the moving image captured by the imaging unit 222 and the line-of-sight data generated by the line-of-sight detection unit 223, both stored in the storage unit 21, to the recognition degree calculation device 1.
- the recognition level calculation device 1 includes an input unit 11, a display unit 12, a storage unit 13, and a control unit 14.
- the input unit 11 is configured by, for example, a keyboard and a mouse.
- the input unit 11 receives an operation input from an operator of the recognition degree calculation device 1.
- When the recognition degree calculation apparatus 1 can communicate with the wearable terminal 2, the input unit 11 of the recognition degree calculation apparatus 1 may be composed of a button, a sensor terminal, or the like provided on the wearable terminal 2.
- the display unit 12 includes, for example, a liquid crystal display or an organic EL (Electro-Luminescence) display.
- the display unit 12 displays, for example, the moving image visually recognized by the user, the user's mental model, the user's degree of recognition with respect to the city, and the like under the control of the control unit 14.
- the storage unit 13 includes, for example, a ROM, a RAM, a hard disk, and an external storage device connected to the recognition degree calculation device 1.
- the external storage device may be directly connected to the degree-of-recognition calculation device 1 or may be communicably connected via a communication network (not shown).
- the storage unit 13 stores a recognition degree calculation program that causes the computer to function as an acquisition unit 141, an element detection unit 142, a position specifying unit 143, a creation unit 144, and a calculation unit 145, which will be described later.
- the storage unit 13 stores moving images and line-of-sight data acquired from the wearable terminal 2.
- The control unit 14 is composed of, for example, a CPU.
- the control unit 14 includes an acquisition unit 141, an element detection unit 142, a position specification unit 143, a creation unit 144, and a calculation unit 145.
- functions provided in the control unit 14 will be described with reference to flowcharts.
- FIG. 3 is a flowchart showing a flow of processing in which the recognition degree calculation device 1 according to the first embodiment calculates the recognition degree.
- The acquisition unit 141 acquires a moving image showing the spatial route of a city or a building and the movement pattern of the user's line of sight corresponding to the moving image (S1).
- FIG. 4 is a diagram illustrating a user's line of sight in one landscape included in the moving image illustrating the spatial route according to the first embodiment.
- In FIG. 4, a left line of sight VL indicating the line of sight of the left eye, a right line of sight VR indicating the line of sight of the right eye, and an intermediate point VI indicating the midpoint between the left line of sight VL and the right line of sight VR are displayed.
- The acquisition unit 141 analyzes the trajectory of the intermediate point VI and identifies "looking around", "confirmation", and "gazing" as the movement pattern of the user's line of sight.
- In the present embodiment, the left line of sight VL, the right line of sight VR, and the intermediate point VI are displayed, but the present invention is not limited to this, and only one line of sight may be displayed.
- "Looking around" is, for example, visual recognition of the same visual target for 1/5 second or less. "Looking around" is visual recognition by which the user mainly localizes himself or herself in space. "Looking around" is also a state in which the user merely looks over a range including an arbitrary element without grasping what the element is. When the visual recognition time of the same visual target is 1/5 second or less, the acquisition unit 141 determines that the user has looked around the landscape displayed at that position.
- "Confirmation" is visual recognition of the same visual target for longer than 1/5 second and shorter than 1 second. "Confirmation" is visual recognition by which the user localizes himself or herself in space, or by which the user localizes the visual target. "Confirmation" is a state in which the user knows that an arbitrary element exists but does not yet know what the element is. "Gazing" is visual recognition of the same visual target for 1 second or longer. "Gazing" is visual recognition by which the user confirms the content of the visual target. "Gazing" means that the user recognizes figures, forms, characters, and the like of the visual target, for example, the shape of a window or a door, the pattern of a finishing material, the form of a building, a plant, furniture, a car, or a person, sign information, a map, a computer screen, and the like.
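- The duration thresholds above (1/5 second and 1 second) suggest a simple classification rule. The following sketch merely illustrates that rule with hypothetical names; it is not the claimed procedure itself.

```python
def classify_fixation(duration_s: float) -> str:
    """Classify a fixation on the same visual target by its duration,
    following the thresholds described in the text."""
    if duration_s <= 0.2:          # 1/5 second or less: the user merely looks around
        return "looking_around"
    elif duration_s < 1.0:         # longer than 1/5 s and shorter than 1 s: confirmation
        return "confirmation"
    else:                          # 1 second or longer: gazing at the content of the target
        return "gazing"

# Example: a 0.8-second fixation is treated as "confirmation".
assert classify_fixation(0.8) == "confirmation"
```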
- the element detection unit 142 detects an element that is estimated to be recognized by the user in a city or the like based on the moving image acquired by the acquisition unit 141 and the line-of-sight movement pattern (S2).
- the elements include paths, edges, districts, nodes, and landmarks proposed by Kevin Lynch. Paths, edges, districts, nodes, and landmarks are defined as follows.
- A "path" is a route that the user follows daily or occasionally, or a route along which the user may pass.
- An “edge” is a linear element different from a path among linear elements, for example, a boundary between two elements such as a coastline, a railroad track, an edge of a development site, and a wall.
- “District” refers to a portion of a city having a medium to large size, and is an element having a two-dimensional extent. The user enters the inside of the “district”.
- the “district” is recognized from the inside, but if it is visible from the outside, it is also referred to from the outside.
- a “node” is a major point within a city. The user can enter the “node”, go to the “node”, and start from the “node”.
- “Landmarks” are things that the user visually recognizes from the outside, and are simply defined things such as buildings, signboards, shops, and mountains. A “landmark” is a thing that the user can visually recognize from the outside without entering the inside.
- The element detection unit 142 analyzes each of the plurality of images included in the moving image and identifies, as the composition corresponding to the image, a one-point perspective composition or a two-point perspective composition. More specifically, for example, the element detection unit 142 binarizes the pixel values of each image included in the moving image by brightness and identifies the boundary lines between the two values as perspective projection lines. The element detection unit 142 then identifies the composition corresponding to the image based on the combination of the plurality of perspective projection lines identified in the image.
- FIG. 5 is a diagram illustrating an example in which a perspective projection line is specified for an image according to the first embodiment. In FIG. 5, it can be confirmed that a one-point perspective composition is specified for the image.
- In this way, the element detection unit 142 identifies the perspective projection lines and thereby specifies the composition of the image.
- The element detection unit 142 may also analyze figures and coordinate axes by using the perspective structure, multi-level quantization of pixel values based on saturation, extraction of lines from the image using a Hough transform algorithm, and comparison with images taken immediately before and after the image in question.
- The element detection unit 142 may also store in advance, in the storage unit 13, a plurality of images corresponding to perspective compositions, and specify the perspective composition by collating the images included in the moving image acquired by the acquisition unit 141 with the images stored in the storage unit 13.
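- As a rough illustration of the composition identification described above, the sketch below binarizes an image by brightness, extracts candidate perspective projection lines with a Hough transform (here via OpenCV), and guesses a one-point or two-point perspective composition from where the extended lines converge. The convergence heuristic and all thresholds are assumptions added for illustration, not values taken from the disclosure.

```python
import cv2
import numpy as np

def detect_perspective_lines(image_bgr: np.ndarray) -> np.ndarray:
    """Binarize by brightness and extract straight boundary lines
    as candidate perspective projection lines."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    return lines if lines is not None else np.empty((0, 1, 4), dtype=np.int32)

def estimate_composition(lines: np.ndarray) -> str:
    """Crude guess of the composition: cluster pairwise intersections of the
    detected lines and count how many vanishing-point clusters remain."""
    segs = lines.reshape(-1, 4)
    points = []
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            p = _intersection(segs[i], segs[j])
            if p is not None:
                points.append(p)
    if not points:
        return "unknown"
    # Merge intersections that fall within 50 px of each other into one vanishing point.
    clusters = []
    for p in points:
        for c in clusters:
            if np.linalg.norm(p - c["center"]) < 50:
                c["members"].append(p)
                c["center"] = np.mean(c["members"], axis=0)
                break
        else:
            clusters.append({"center": p, "members": [p]})
    strong = [c for c in clusters if len(c["members"]) >= 3]
    if len(strong) == 1:
        return "one_point_perspective"
    if len(strong) >= 2:
        return "two_point_perspective"
    return "unknown"

def _intersection(s1, s2):
    """Intersection of the infinite lines through two segments, or None if parallel."""
    x1, y1, x2, y2 = map(float, s1)
    x3, y3, x4, y4 = map(float, s2)
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-6:
        return None
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return np.array([px, py])
```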
- The element detection unit 142 detects elements that the user is estimated to have recognized based on the identified composition and the movement pattern of the user's line of sight. Specifically, the element detection unit 142 detects such elements by referring to element specifying information, illustrated in FIG. 6A, that associates elements with composition patterns and line-of-sight movement patterns. For example, when the composition of an image included in the moving image is a one-point perspective composition and the line-of-sight movement pattern corresponding to the image is gazing, the element detection unit 142 detects a "path" as the element the user is estimated to have recognized.
- the element detection unit 142 may specify an element based on the phase.
- The phase is a geometric region constructed from the various physical events that the user perceives visually, and concepts such as "inside" and "outside" can be derived from it. For example, the user may be conscious of being inside when feeling enclosed by an arbitrary object (element), or conscious of being outside when perceiving an arbitrary object (element) from the outside.
- The element detection unit 142 detects the phase based on, for example, the composition of an image included in the moving image and the coordinate axes in that composition. The element detection unit 142 may further detect the phase based on the movement pattern of the line of sight and the position, within the image, of the object that the user visually recognizes.
- When the line-of-sight movement pattern is looking around and the viewing angle is narrow, the phase tends to be inside the object being looked around; when the viewing angle is wide, the phase tends to be outside the object being looked around. When the line-of-sight movement pattern consists of multiple confirmations of an object, the phase tends to be inside the object. When the line-of-sight movement pattern is gazing, the phase tends to be outside the object being gazed at.
- FIG. 6B is a diagram illustrating another example of the element specifying information.
- In the element specifying information of FIG. 6B, an element, a composition pattern, a line-of-sight movement pattern, and a phase are associated with one another. Even when a plurality of elements are associated with a given combination of composition pattern and line-of-sight movement pattern, the element detection unit 142 can specify a single element based on the phase.
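- The element specifying information of FIGS. 6A and 6B can be thought of as a lookup table keyed by composition pattern, line-of-sight movement pattern, and phase. The sketch below shows one hypothetical way to encode such a table; apart from the one-point perspective/gazing/path combination stated in the text, the entries are placeholders, not contents of the actual figures.

```python
from typing import Optional

# Hypothetical element specifying information: (composition, gaze pattern, phase) -> element.
# Only the ("one_point_perspective", "gazing") -> "path" row is stated in the text;
# the other rows are placeholders for illustration.
ELEMENT_TABLE = {
    ("one_point_perspective", "gazing", None): "path",
    ("one_point_perspective", "confirmation", "inside"): "path",
    ("two_point_perspective", "gazing", "outside"): "landmark",
    ("two_point_perspective", "looking_around", "outside"): "district",
}

def detect_element(composition: str, gaze_pattern: str,
                   phase: Optional[str] = None) -> Optional[str]:
    """Look up the element the user is estimated to have recognized.
    The phase ("inside"/"outside") disambiguates when several elements share
    the same composition and gaze pattern, as described for FIG. 6B."""
    # Prefer the phase-specific entry, then fall back to a phase-agnostic one.
    return ELEMENT_TABLE.get((composition, gaze_pattern, phase),
                             ELEMENT_TABLE.get((composition, gaze_pattern, None)))
```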
- the element detection unit 142 may specify a composition pattern of a plurality of images included in a moving image photographed by the wearable terminal 2, and may detect an element based on the composition pattern of the image.
- a composition pattern of a plurality of images and information indicating an element may be associated with each other and stored in the storage unit 13, and the element associated with the composition pattern identified by the element detection unit 142 may be identified.
- The element detection unit 142 may also detect, from the composition pattern, a region in which geometric coordinate axes can be formed, and detect an element of the city or the like based on that region, the line-of-sight movement pattern, and the phase.
- the position specifying unit 143 specifies the position of the element detected by the element detecting unit 142 in a city or the like (S3). Specifically, the position specifying unit 143 specifies the position indicated by the position information included in the line-of-sight data corresponding to the detected element as the position of the element.
- The position specifying unit 143 may acquire, from the wearable terminal 2, information that associates each image included in the moving image with position information (shooting position) of the image. In this case, the position specifying unit 143 may specify the position of the element based on the shooting position associated with the image, among the images included in the moving image, in which the element was detected by the element detection unit 142.
- the position specifying unit 143 may adjust the position of the element based on the terminal position included in the line-of-sight data and the viewing direction of the user. For example, since the image shows a landscape in the viewing direction of the user, the elements included in the image are not in the position indicated by the line-of-sight data but in the viewing direction of the user. For this reason, the position specifying unit 143 may adjust the position of the element from the position indicated by the specified position information to a position corresponding to the line-of-sight direction based on the viewing direction included in the line-of-sight data.
- The position specifying unit 143 may also specify the position of the element based on a plurality of continuous images included in the moving image.
- Specifically, the position specifying unit 143 analyzes the plurality of continuous images, specifies the moving direction of the user, and, based on the moving direction and the position where shooting of the moving image started, specifies the position corresponding to the image in which the element was detected. The position specifying unit 143 then specifies that position as the position of the element.
- FIG. 7 is a diagram showing an example of specifying the position of an element based on a plurality of images. As shown in FIG. 7, it is assumed that four continuous images (A) to (D) are included in the moving image.
- the position specifying unit 143 analyzes the images (A) to (D) and detects objects and contour lines included in the respective images as shown in (a) to (d). For example, in FIGS. 7A to 7D, as a result of analyzing the images (A) to (D), it can be confirmed that two contour lines and the objects X to Z are detected.
- the position specifying unit 143 specifies the moving direction and moving speed of the user based on the position of the object and the contour line in each of the plurality of images, and the position where the shooting of the moving image is started, the specified moving direction, Based on the specified moving speed, the position corresponding to each of the plurality of images is specified.
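- The interpolation just described amounts to dead reckoning from the position where shooting started. The sketch below, with hypothetical names and a flat-earth approximation, advances that start position along the estimated moving direction at the estimated moving speed to assign a position to the frame in which an element was detected.

```python
import math

def position_of_frame(start_lat: float, start_lon: float,
                      heading_deg: float, speed_mps: float,
                      frame_time_s: float) -> tuple[float, float]:
    """Estimate the position corresponding to a frame by moving from the
    shooting-start position along the user's moving direction at the
    estimated moving speed (flat-earth approximation, small distances)."""
    distance_m = speed_mps * frame_time_s
    heading = math.radians(heading_deg)
    d_north = distance_m * math.cos(heading)
    d_east = distance_m * math.sin(heading)
    # Convert the metric offset into degrees of latitude/longitude.
    lat = start_lat + d_north / 111_320.0
    lon = start_lon + d_east / (111_320.0 * math.cos(math.radians(start_lat)))
    return lat, lon

# Example: 10 seconds of walking at 1.2 m/s heading due east from the start position.
element_position = position_of_frame(35.681, 139.767, 90.0, 1.2, 10.0)
```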
- the creation unit 144 creates a mental model indicating a map that the user envisions for a city or the like based on the element detected by the element detection unit 142 and the position of the element specified by the position specifying unit 143 ( S4).
- Specifically, as shown in FIG., the creation unit 144 creates a simple mental model by drawing images indicating the elements detected by the element detection unit 142 on a blank drawing, based on the positions of the elements specified by the position specifying unit 143.
- Alternatively, the creation unit 144 may create a mental model by drawing images indicating the elements detected by the element detection unit 142, based on the positions of the elements specified by the position specifying unit 143, on an electronic map (blank map) that includes the spatial route of the city or the like but to which no information such as elements is attached.
- Alternatively, the creation unit 144 may generate, as the mental model, data that associates each of the plurality of elements detected by the element detection unit 142 with position information indicating the position of the element specified by the position specifying unit 143, without creating a map itself.
- the creation unit 144 creates a 2D map model as a mental model, but may create a 3D map model.
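- One simple realization of a mental model that associates each detected element with position information, as described above, is a plain list of (element type, position) records that can later be drawn onto a blank map in two or three dimensions. The sketch below is such a minimal representation; the names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class MentalElement:
    kind: str                 # "path", "edge", "district", "node", or "landmark"
    lat: float
    lon: float
    height_m: float = 0.0     # used only when a three-dimensional mental model is created

@dataclass
class MentalModel:
    """Map the user is estimated to envision: detected elements and their positions."""
    elements: list[MentalElement] = field(default_factory=list)

    def add(self, kind: str, lat: float, lon: float, height_m: float = 0.0) -> None:
        self.elements.append(MentalElement(kind, lat, lon, height_m))

model = MentalModel()
model.add("path", 35.6810, 139.7670)
model.add("landmark", 35.6585, 139.7454, height_m=333.0)  # a distant tall building
```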
- FIG. 10 is a diagram illustrating an example of creating a three-dimensional mental model.
- the element detection unit 142 estimates a geometric coordinate axis from the composition or figure in the image based on the image (a).
- At this stage, spatial localization is insufficient, and the element and the direction in which the element exists remain ambiguous.
- the element detection unit 142 detects that the user has continuously confirmed a region indicating a road as an object in the image (b). In this case, a geometric coordinate axis in which the user feels an internal phase with respect to the object is recognized, and a path is generated in the mental model.
- the element detection unit 142 detects that the user gazes at an area indicating a road intersection as an object in the image (c). In this case, a geometric coordinate axis in which the user feels an external phase with respect to the object is recognized, and a node is generated in the mental model. Subsequently, in the image (d), it is assumed that the user looks around a building that exists beside the road as an object. In this case, a geometric coordinate axis in which the user feels an external phase with respect to the object is recognized, and a district is generated in the mental model.
- Subsequently, it is assumed that the user looks around an area indicating a road as an object. In this case, a geometric coordinate axis in which the user feels an internal phase with respect to the object is recognized, and a path is generated in the mental model.
- Further, when a geometric coordinate axis in which the user feels an external phase with respect to an object is recognized, a landmark is generated in the mental model. For example, when the user confirms, as an object, a building that can be seen between other buildings, a landmark is generated in the mental model at a position corresponding to the confirmed position.
- A two-dimensional mental model may not be able to represent such a landmark, but by using a three-dimensional mental model, buildings that exist far away can also be reflected in the mental model. Thereby, the designer can, for example, evaluate the degree of influence of the landmark in the city or the like by confirming how the landmark is recognized by a plurality of users in their three-dimensional mental models.
- The calculation unit 145 calculates the degree of recognition based on the degree of coincidence between the elements and element positions included in the mental model and the elements and element positions included in the actual map illustrated in FIG. 11. More specifically, the calculation unit 145 compares the elements and element positions included in the mental model with the elements and element positions included in the actual map. The calculation unit 145 then calculates the detection rate of elements by the element detection unit 142 with respect to the elements included in the actual map, and the error between the positions of the elements included in the actual map and the specified positions. The calculation unit 145 then calculates the degree of recognition based on at least one of the calculated detection rate and the calculated error.
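- A minimal sketch of the comparison described above: the detection rate of the elements on the actual map and the average position error of the matched elements are combined into one score. The matching radius and the weighting between the two terms are hypothetical choices added for illustration and are not specified in the disclosure.

```python
import math

def degree_of_recognition(mental: list[tuple[str, float, float]],
                          reference: list[tuple[str, float, float]],
                          match_radius_m: float = 30.0) -> float:
    """Combine the element detection rate and the position error into a 0..1 score."""
    matched, errors = 0, []
    for kind, lat, lon in reference:
        best = None
        for m_kind, m_lat, m_lon in mental:
            if m_kind != kind:
                continue
            err = _distance_m(lat, lon, m_lat, m_lon)
            if err <= match_radius_m and (best is None or err < best):
                best = err
        if best is not None:
            matched += 1
            errors.append(best)
    detection_rate = matched / len(reference) if reference else 0.0
    mean_error = sum(errors) / len(errors) if errors else match_radius_m
    # Hypothetical weighting: detection rate discounted by the normalized position error.
    return detection_rate * (1.0 - mean_error / match_radius_m)

def _distance_m(lat1, lon1, lat2, lon2) -> float:
    """Approximate ground distance in metres for nearby points."""
    d_lat = (lat2 - lat1) * 111_320.0
    d_lon = (lon2 - lon1) * 111_320.0 * math.cos(math.radians(lat1))
    return math.hypot(d_lat, d_lon)
```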
- The calculation unit 145 may calculate the degree of coincidence with the elements included in the actual map by limiting the comparison to only the path and node elements among the elements included in the mental model. For example, when a three-dimensional mental model is created and the map to be compared against is two-dimensional, the degree of coincidence with the elements included in the actual map cannot be calculated with high accuracy. Therefore, the calculation unit 145 may correct the notation of the mental model to obtain a mental model limited to path and node elements, and calculate the degree of coincidence between the corrected mental model and the actual map.
- FIG. 12 is a diagram illustrating an example of simplifying the mental model.
- FIGS. 12A to 12C are front views of a three-dimensional mental model.
- the calculation unit 145 identifies a path and an edge corresponding to the path from the mental model illustrated in FIG. 12A, and draws a line indicating the path as illustrated in FIG. Thereafter, as illustrated in FIG. 12B, the calculation unit 145 simplifies the mental model by deleting elements other than the drawn lines from the three-dimensional mental model.
- the calculation unit 145 causes the display unit 12 to display information indicating the calculated degree of recognition, for example. Note that the calculation unit 145 may store information indicating the calculated degree of recognition in the storage unit 13.
- The calculation unit 145 calculates a complexity indicating the complexity of the city or the building based on the number of elements detected by the element detection unit 142. For example, the number of elements detected per unit time or unit distance is specified in advance, from moving images showing a plurality of cities or the like, as a reference value. The calculation unit 145 then calculates the number of elements detected per unit time in the moving image, based on the length of the moving image acquired by the acquisition unit 141 and the number of elements detected by the element detection unit 142, and may calculate the complexity based on that number of elements and the reference value.
- a mental model in which elements detected by the element detection unit 142 corresponding to each of a plurality of users are reflected on an actual map may be generated. Then, the calculation unit 145 may calculate the complexity based on the ratio of the elements detected by the element detection unit 142 to the elements included in the mental model.
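- A sketch of the complexity calculation described above, assuming the reference value is expressed as elements detected per unit time; the normalization against the reference value is one possible interpretation, and the names are hypothetical.

```python
def complexity(num_detected_elements: int, video_length_s: float,
               reference_elements_per_minute: float) -> float:
    """Elements detected per minute in this moving image, normalized by a
    reference value determined in advance from moving images of other cities."""
    if video_length_s <= 0 or reference_elements_per_minute <= 0:
        raise ValueError("length and reference value must be positive")
    elements_per_minute = num_detected_elements / (video_length_s / 60.0)
    return elements_per_minute / reference_elements_per_minute

# Example: 42 elements in a 7-minute walk, against a reference of 5 elements per minute.
score = complexity(42, 7 * 60, 5.0)   # 1.2 -> somewhat more complex than the reference cities
```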
- As described above, the recognition degree calculation apparatus 1 detects elements that the user is estimated to have recognized in the city or the like based on the acquired moving image and the movement pattern of the line of sight, and can thereby detect that the user perceives an image of the city or the like.
- the recognition level calculation device 1 creates a mental model indicating a map that the user envisions for a city or the like based on the element detected by the element detection unit 142 and the position of the element specified by the position specifying unit 143. Based on the mental model and the reference information indicating the actual elements and the positions of the elements, the user's degree of recognition of the city or the like is calculated. By doing in this way, the degree-of-recognition calculation apparatus 1 can index the size of the image that the user has with respect to the city or the like, that is, the user's intelligibility with respect to the city or the like.
- A designer of a city or the like can evaluate whether the city or the like is easy for users to understand based on the degree of recognition calculated by the recognition degree calculation device 1. The designer can then apply the structure of a city or the like that has been evaluated as easy for users to understand to the design of a new city or the like.
- A map creator can, for example, hand a user a map with which to explore a city and, based on the degree of recognition of the city obtained as a result, create a map that is easy for the user to understand.
- the creator of the map can, for example, develop a map notation that is generally easy to understand.
- A building designer can, for example, hand a user a floor plan of a building, have the user explore the inside of the building, and evaluate, based on the degree of recognition of the building obtained as a result, whether the floor plan is easy for the user to understand. Thereby, the building designer can, for example, develop floor plan notations that are generally easy to understand.
- Further, when the user explores a city using the map created by the map creator, or explores the interior of a building using the floor plan created by the building designer, the user can acquire the elements of the city or the like in a well-balanced manner while exploring, and the user's physiological sense of direction can be improved.
- the recognition degree calculation device 1 calculates the complexity indicating the complexity of the city or the building based on the number of detected elements.
- Conventionally, the complexity of cities and buildings, which relates to the uniqueness of the user's experience of them, has not been indexed, so designers could not recognize what kind of design gives a highly unique experience. Moreover, because the complexity users feel about cities and the like and the uniqueness of the experience were not indexed, design proposals could not be evaluated from the viewpoint of complexity when designing cities, public buildings, and the like, and the plans presented by designers were difficult to agree upon.
- Since the degree-of-recognition calculation apparatus 1 can calculate the complexity, the designer can evaluate how complex the city shown in the moving image is relative to other cities and the like, and can apply the result to the design of cities and the like of high quality. In addition, the designer can design a city or the like in which the user can feel complexity and uniqueness.
- the calculation unit 145 calculates the degree of recognition using the mental model and the actual map illustrated in FIG. 11 as reference information.
- The control unit of the recognition degree calculation device 1 may also create, as illustrated in FIG., a mental model (collective intelligence map) in which the elements detected by the element detection unit 142 for each of a plurality of users are reflected on an actual map, based on the degrees of recognition of the plurality of users or on the degree to which each element was detected by the plurality of users.
- The calculation unit 145 may then calculate the degree of recognition based on the elements and element positions included in the user's mental model and the elements and element positions included in the mental model in which the elements detected for the plurality of users are reflected on the actual map.
- The second embodiment differs from the first embodiment in that the wearable terminal 2 does not detect the line of sight while the user passes through an actual spatial route in a city or the like, but instead detects the line of sight while the user visually recognizes a computer graphics moving image (hereinafter referred to as a CG moving image) showing the city or the like.
- In the following, the parts that differ from the first embodiment are described, and the description of the parts that are the same as in the first embodiment is omitted as appropriate.
- a user wearing the wearable terminal 2 visually recognizes a CG video showing a city or the like displayed on a display unit of a video display device (not shown).
- the line-of-sight detection unit 223 of the wearable terminal 2 detects the line of sight of the user with respect to the CG video.
- The line-of-sight detection unit 223 generates line-of-sight data that associates the time at which the line of sight was detected with the line-of-sight position indicating the coordinates of the line of sight in the CG moving image, and stores the line-of-sight data in the storage unit 21.
- the time when the line of sight is detected is associated with the reproduction time of the CG video.
- the acquisition unit 141 of the recognition degree calculation device 1 acquires a CG moving image indicating a spatial route such as a city and a movement pattern of a user's line of sight corresponding to the CG moving image. Specifically, the acquisition unit 141 acquires a CG video from the video display device, and acquires line-of-sight data corresponding to the CG video from the wearable terminal 2. And the acquisition part 141 acquires a user's gaze movement pattern based on gaze data.
- the element detection unit 142 detects an element estimated to be recognized by the user in a city or the like based on the CG moving image acquired by the acquisition unit 141 and the line-of-sight movement pattern.
- The position specifying unit 143 specifies the position of the element detected by the element detection unit 142 in the city or the like. Specifically, the position specifying unit 143 refers to the line-of-sight data and specifies the line-of-sight detection time associated with the line-of-sight position at which the element was detected. The position specifying unit 143 then specifies, as the position of the element, the position on the spatial route corresponding to the playback time of the CG moving image associated with that detection time.
- The creation unit 144 creates the user's mental model based on the elements detected by the element detection unit 142 and the positions of the elements specified by the position specifying unit 143. The calculation unit 145 then calculates the user's degree of recognition of the city or the like based on the mental model created by the creation unit 144 and a map (reference information) indicating the elements included in the city or the like corresponding to the CG moving image and the positions of those elements.
- As described above, the degree-of-recognition calculation apparatus 1 detects elements that the user is estimated to have recognized based on a CG moving image showing a city or the like and the line-of-sight movement pattern corresponding to the CG moving image.
- In this way, the recognition degree calculation device 1 can detect elements that the user is estimated to have recognized in a city or the like and calculate the degree of recognition even for a city or the like that is still in the planning stage or under development. Therefore, a designer can evaluate whether a city or the like is easy for users to understand while it is still being planned or developed.
- the wearable terminal 2 acquires the line-of-sight data when the user visually recognizes the CG video, but the present invention is not limited to this.
- the wearable terminal 2 may acquire line-of-sight data when a user visually recognizes a moving image obtained by photographing a city or the like in advance.
- the recognition degree calculation apparatus 1 can calculate the recognition degree of the user with respect to the city or the like even if the user does not go to a place where the city or the like is.
- the CG video is a video showing a predetermined spatial route in a city or the like, but is not limited thereto.
- the CG moving image may be a moving image indicating a place where the user cannot actually pass, such as in the air. By doing in this way, the user who has visually recognized the CG video can recognize an image of a city or the like that is not normally recognized.
- The CG moving image may be, for example, a CG moving image in a VR (Virtual Reality) or AR (Augmented Reality) format that can display an arbitrary space of a pre-designed three-dimensional space model.
- It may also be a moving image in which the CG changes in response to the line of sight, or a moving image in a format in which the CG does not change and the instruction information from the user is stored as a log in association with the shape of the three-dimensional CG so that it can be confirmed later.
- <Third Embodiment> [Acquiring a line-of-sight movement pattern based on a moving image captured by the wearable terminal 2] Subsequently, a third embodiment will be described.
- The third embodiment differs from the first embodiment in that the wearable terminal 2 generates only the moving image without generating line-of-sight data, and the recognition degree calculation device 1 acquires the movement pattern of the user's line of sight based on the moving image acquired from the wearable terminal 2.
- it is assumed that a terminal that does not have a function of generating line-of-sight data is used as the wearable terminal 2.
- the acquisition unit 141 acquires, from the wearable terminal 2, a moving image that indicates a spatial route of a city or the like photographed by the wearable terminal 2.
- the element detection unit 142 detects an element estimated to be recognized by the user in a city or the like based on the composition corresponding to each of the plurality of images included in the moving image. Specifically, the element detection unit 142 identifies a composition corresponding to the image by analyzing each of the plurality of images included in the moving image. Then, the element detection unit 142 refers to the element specifying information illustrated in FIG. 6A or 6B and specifies an element associated with the specified composition, thereby detecting an element that is estimated to be recognized by the user.
- the position specifying unit 143 specifies the position of the detected element in the city or the like. Specifically, in the third embodiment, the terminal position acquired by the position information acquisition unit 221 is associated with the playback position of the moving image. The position specifying unit 143 specifies the playback position of the moving image based on the image when the element is detected, and specifies the terminal position associated with the playback position as the position of the detected element.
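- In the third embodiment, the link between an element and a place is the association between the playback position of the moving image and the terminal position. A minimal sketch of that lookup, with hypothetical names, is shown below.

```python
import bisect

def terminal_position_at(playback_s: float,
                         track: list[tuple[float, float, float]]) -> tuple[float, float]:
    """Return the terminal position recorded closest to a playback position.
    `track` is a list of (playback_seconds, lat, lon) sorted by playback_seconds,
    i.e. the association between playback position and terminal position."""
    times = [t for t, _, _ in track]
    i = bisect.bisect_left(times, playback_s)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(track)]
    j = min(candidates, key=lambda k: abs(times[k] - playback_s))
    _, lat, lon = track[j]
    return lat, lon

# Example: position of the element detected in the frame at 12.4 s of playback.
pos = terminal_position_at(12.4, [(0.0, 35.6810, 139.7670), (10.0, 35.6812, 139.7675),
                                  (20.0, 35.6815, 139.7681)])
```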
- the functions of the creation unit 144 and the calculation unit 145 are the same as those in the first embodiment, and a description thereof will be omitted.
- As described above, in the third embodiment, the recognition degree calculation device 1 can identify elements that the user is estimated to have recognized in a city or the like based only on the moving image, and can therefore calculate the user's degree of recognition of the city or the like even when the user's line-of-sight data cannot be obtained.
- the degree-of-recognition calculation apparatus 1 acquires the moving image and the line-of-sight data from the wearable terminal 2 by communication, but is not limited thereto.
- an external storage medium removed from the wearable terminal 2 may be attached to the recognition degree calculation device 1, and the recognition degree calculation device 1 may acquire moving images and line-of-sight data from the external storage medium.
- the recognition level calculation device 1 and the wearable terminal 2 are different devices, but the recognition level calculation device 1 and the wearable terminal 2 may be integrated devices.
- the wearable terminal 2 may have a function included in the recognition degree calculation device 1, or the recognition degree calculation device 1 may have a function included in the wearable terminal 2.
- the input unit of the wearable terminal 2 may be configured by a button, a sensor terminal, or the like. Then, the line of sight input to the wearable terminal 2 or the gesture input by the user's hand or the like in the range where the wearable terminal 2 can be photographed (within the user's field of view) may be received by the button or the sensor terminal.
- DESCRIPTION OF REFERENCE NUMERALS: 1 ... recognition degree calculation apparatus, 11 ... input unit, 12 ... display unit, 13 ... storage unit
Abstract
Provided is a degree of awareness computation device 1, comprising: an acquisition unit 141 which acquires a motion video which shows a spatial path of a city or a building, and a pattern of movement of a user's line of sight in the motion video; an element detection unit 142 which detects elements in the city or building which, on the basis of the motion video and the pattern of line of sight movement, are assumed to have been recognized by the user; a position identification unit 143 which identifies the positions in the city or building of the detected elements; a creation unit 144 which, on the basis of the detected elements and the identified positions of the elements, creates a mental model which represents a map of the city or building which the user would visualize; and a computation unit 145 which, on the basis of the mental model and reference information which represents actual elements and the positions of the elements which correspond to the spatial path, computes the user's degree of awareness of the city or building, which signifies how intuitive the user finds the design of the city or building.
Description
The present invention relates to a recognition degree calculation device, a recognition degree calculation method, and a recognition degree calculation program.
In recent years, urban development and architectural planning have been carried out based on various viewpoints. For example, Patent Literature 1 discloses that a person's living environment is evaluated and reflected in a city plan or the like.
In conventional urban development and architectural planning, however, designers have designed cities and buildings based on their own subjective views, and the designs have not sufficiently reflected how easy the cities and buildings are for their users to understand. That is, because the intelligibility of cities and buildings for users has not been indexed, designers could not recognize what kind of design is easy for users to understand, and could not sufficiently design with attention to users' flow lines and visibility in cities and buildings, or to users' perception of cities and buildings. For example, because the intelligibility of cities and buildings for users is not indexed, design proposals cannot be evaluated from the viewpoint of intelligibility when designing cities, public buildings, and the like, and the plans presented by designers have been difficult to agree upon.
Therefore, the present invention has been made in view of these points, and an object of the present invention is to provide a degree-of-recognition calculation apparatus, a degree-of-recognition calculation method, and a degree-of-recognition calculation program capable of indexing how easy a city or a building is for a user to understand.
The degree-of-recognition calculation apparatus according to the first aspect of the present invention includes: an acquisition unit that acquires a moving image showing a spatial route of a city or a building and a movement pattern of a user's line of sight corresponding to the moving image; an element detection unit that detects, based on the moving image and the movement pattern of the line of sight, an element that the user is estimated to have recognized in the city or the building; a position specifying unit that specifies the position of the detected element in the city or the building; a creation unit that creates, based on the detected element and the specified position of the element, a mental model showing the map the user envisions for the city or the building; and a calculation unit that calculates, based on the mental model and reference information indicating the actual elements corresponding to the spatial route and the positions of those elements, the user's degree of recognition of the city or the building, which indicates how easy the city or the building is for the user to understand.
The calculation unit may calculate the degree of recognition based on the degree of coincidence between the elements and element positions included in the mental model and the elements and element positions included in the reference information.
The calculation unit may calculate a degree of complexity indicating the complexity of the city or the building based on the number of elements detected by the element detection unit.
The element detection unit may detect the elements based on the composition corresponding to each image included in the moving image and on the line-of-sight movement pattern.
The position specifying unit may specify the position of an element based on the shooting position associated with the image, included in the moving image, in which the element was detected.
The position specifying unit may specify, based on a plurality of consecutive images included in the moving image, the position corresponding to the image in which the element was detected as the position of the element.
The acquisition unit may acquire, as the moving image, a computer-graphics moving image representing the city or the building.
The acquisition unit may acquire, as the moving image, a moving image captured by a wearable terminal worn by the user.
The element detection unit may identify the composition pattern of an image included in the moving image captured by the wearable terminal, and may detect the elements further based on the composition pattern of the image.
The acquisition unit may acquire the pattern of movement of the user's line of sight detected by a line-of-sight detection device that detects the user's line of sight.
A recognition degree calculation device according to a second aspect of the present invention includes: an acquisition unit that acquires a moving image captured by a wearable terminal worn by a user and showing a spatial route through a city or a building; an element detection unit that detects, based on the composition corresponding to each of a plurality of images included in the moving image, elements that the user is presumed to have recognized in the city or the building; a position specifying unit that specifies the positions of the detected elements in the city or the building; a creation unit that creates, based on the detected elements and the specified positions, a mental model representing the map that the user envisions of the city or the building; and a calculation unit that calculates, based on the mental model and reference information indicating the actual elements and their positions in the area corresponding to the spatial route, the user's degree of recognition of the city or the building, which indicates how easy the city or the building is for the user to understand.
A recognition degree calculation method according to a third aspect of the present invention is executed by a computer and includes the steps of: acquiring a moving image showing a spatial route through a city or a building and a pattern of movement of a user's line of sight with respect to the moving image; detecting, based on the moving image and the line-of-sight movement pattern, elements that the user is presumed to have recognized in the city or the building; detecting the positions of the detected elements in the city or the building; creating, based on the detected elements and their positions, a mental model representing the map that the user envisions of the city or the building; and calculating, based on the mental model and reference information indicating the actual elements and their positions along the spatial route, the user's degree of recognition of the city or the building, which indicates how easy the city or the building is for the user to understand.
A recognition degree calculation program according to a fourth aspect of the present invention causes a computer to function as: an acquisition unit that acquires a moving image showing a spatial route through a city or a building and a pattern of movement of a user's line of sight corresponding to the moving image; an element detection unit that detects, based on the moving image and the line-of-sight movement pattern, elements that the user is presumed to have recognized in the city or the building; a position specifying unit that specifies the positions of the detected elements in the city or the building; a creation unit that creates, based on the detected elements and the specified positions, a mental model representing the map that the user envisions of the city or the building; and a calculation unit that calculates, based on the mental model and reference information indicating the actual elements and their positions along the spatial route, the user's degree of recognition of the city or the building, which indicates how easy the city or the building is for the user to understand.
According to the present invention, the ease with which users understand a city or a building can be expressed as an index.
<First Embodiment>
A first embodiment of the present invention will now be described.
The present inventor considered that a user perceives an image of a city or a building by recognizing the elements that constitute it, and that these elements therefore influence how easy the city or building is for the user to understand. The inventor further presumed that the user recognizes the constituent elements of a city or building from the geometric compositions, coordinate axes, and figures perceived when viewing it, and found that the recognition degree calculation device 1 can calculate the user's degree of recognition of the city or building from the elements the user is presumed to have recognized.
FIG. 1 is a diagram showing an overview of the recognition degree calculation device 1 according to the first embodiment. The recognition degree calculation device 1 is a computer that calculates a degree of recognition, an index of how easy a city or a building is for its users to understand. In the following description, "a city or a building" is referred to as "a city or the like".
Specifically, the recognition degree calculation device 1 acquires, from a wearable terminal 2 or the like worn by the user, a moving image showing a spatial route through a city or the like and the pattern of movement of the user's line of sight corresponding to the moving image ((1) in FIG. 1).
Based on the acquired moving image and the line-of-sight movement pattern, the recognition degree calculation device 1 detects elements that the user is presumed to have recognized in the city or the like, together with the positions of those elements ((2) in FIG. 1). The elements include "paths", "edges", "districts", "nodes", and "landmarks"; they are described in detail later.
Based on the detected elements and their positions, the recognition degree calculation device 1 creates a mental model representing the map that the user envisions of the city or the like ((3) in FIG. 1), and calculates the user's degree of recognition of the city or the like by computing the degree of coincidence between the mental model and an actual map (reference information) indicating the actual elements and their positions along the spatial route ((4) in FIG. 1). Here, the mental model is composed of at least one of symbols, pictures, characters, and the like representing what the user has in mind. The mental model is, for example, a mental map, that is, an image formed from the user's experience of and knowledge about the city or the like.
In this way, the designer of a city or the like can evaluate, based on the degree of recognition calculated by the recognition degree calculation device 1, whether the city or the like is easy for users to understand. The designer can then apply structures that users find easy to understand to the design of a new city or the like, and users can contribute to evaluating and improving the intelligibility of cities and the like.
Next, the configurations of the recognition degree calculation device 1 and the wearable terminal 2 will be described.
[Configuration of Wearable Terminal 2]
FIG. 2 is a diagram showing the configurations of the recognition degree calculation device 1 and the wearable terminal 2 according to the first embodiment.
First, the configuration of the wearable terminal 2 will be described. The wearable terminal 2 is, for example, a computer worn on the user's head. The wearable terminal 2 is a line-of-sight detection device (eye-tracking tool) that captures the scenery and records the user's point of regard within it. Wearing the wearable terminal 2, the user passes through a spatial route, that is, a continuous space in a city or the like. The wearable terminal 2 captures the spatial route to generate a moving image and also generates the user's line-of-sight data. Here, the spatial route is assumed to be a route designated in advance by a designer or the like who investigates the intelligibility of the city or the like. Information indicating the spatial route given to the user may be, for example, map information such as a guide board in front of a station, map information or route information displayed on a terminal such as a smartphone, or route information heard from an acquaintance (for example, spoken directions). Although the spatial route is assumed to be designated in advance, this is not a limitation, and the user may pass through an arbitrary spatial route. As the information indicating the spatial route designated to the user, a mental model (mental map) of the city or the like that the recognition degree calculation device 1 generated for the user in the past, or a mental model (mental map) obtained by the user viewing a map of the city or the like, may also be used.
The wearable terminal 2 includes a storage unit 21 and a control unit 22.
The storage unit 21 is composed of, for example, a ROM (Read Only Memory), a RAM (Random Access Memory), a hard disk, an external storage medium attached to the wearable terminal 2 (for example, an SD card (registered trademark)), an external storage device, or the like.
The control unit 22 is composed of, for example, a CPU. The control unit 22 includes a position information acquisition unit 221, a photographing unit 222, a line-of-sight detection unit 223, and a transmission unit 224.
The position information acquisition unit 221 acquires position information indicating the position of the wearable terminal 2 (the terminal position) based on GPS signals received from a plurality of GPS (Global Positioning System) satellites. The position information acquisition unit 221 may also calculate the position information from the GPS signals received from the plurality of GPS satellites.
The wearable terminal 2 may further include, for example, an acceleration sensor (not shown) that detects acceleration and an orientation sensor (not shown) that detects the orientation corresponding to the front direction of the wearable terminal 2. In that case, the position information acquisition unit 221 may calculate position information indicating a position relative to a reference position (for example, the position where shooting started) based on the detected acceleration and orientation.
The photographing unit 222 generates a moving image by shooting the scenery in the direction the user wearing the wearable terminal 2 is looking. The generated moving image may include a plurality of images, information indicating the playback position (elapsed time from the start of shooting) corresponding to each image, and information indicating the time at which shooting of the moving image started. The photographing unit 222 stores the generated moving image in the storage unit 21.
The line-of-sight detection unit 223 detects, as the user's line of sight in the scenery the user is viewing, the coordinates of that line of sight in the moving image (the gaze position). The line-of-sight detection unit 223 also acquires direction information indicating the direction in which the wearable terminal 2 is facing, that is, the user's viewing direction. The line-of-sight detection unit 223 generates line-of-sight data that associates the time at which the line of sight was detected, the gaze position, the viewing direction, and the terminal position acquired by the position information acquisition unit 221 at that time, and stores the data in the storage unit 21.
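As an illustration only, the following sketch shows one way such a line-of-sight data record could be represented; the field names and types are assumptions for explanation and are not part of the embodiment.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class GazeSample:
    """One line-of-sight data record: detection time, gaze position in the frame,
    viewing direction, and the terminal position at that moment."""
    timestamp: float                    # seconds since the start of recording
    gaze_xy: Tuple[float, float]        # gaze coordinates within the video frame (pixels)
    heading_deg: float                  # direction the wearable terminal is facing (degrees)
    terminal_pos: Tuple[float, float]   # terminal position, e.g. latitude / longitude
```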
Here, the moving image and the line-of-sight data are generated in parallel, and line-of-sight data exists for each of the plurality of images included in the moving image. The photographing unit 222 and the line-of-sight detection unit 223 may also cooperate to generate a moving image in which a pointer indicating the user's line of sight is superimposed. The line-of-sight detection unit 223 may further detect the trajectory of the line-of-sight movement pattern. In addition, the control unit 22 may analyze each of the plurality of images included in the moving image captured by the photographing unit 222 and detect the composition of each image.
In the present embodiment, the line-of-sight data includes the information indicating the terminal position acquired by the position information acquisition unit 221, but this is not a limitation. For example, instead of including the terminal position in the line-of-sight data, the control unit 22 may generate information that associates the playback position, within the moving image, of each image included in the moving image with the terminal position.
The transmission unit 224 performs wireless communication with external devices such as the recognition degree calculation device 1. In response to receiving a request to acquire the moving image and the line-of-sight data from the recognition degree calculation device 1, the transmission unit 224 transmits the moving image captured by the photographing unit 222 and the line-of-sight data generated by the line-of-sight detection unit 223, both stored in the storage unit 21, to the recognition degree calculation device 1.
[Configuration of Recognition Degree Calculation Device 1]
Next, the configuration of the recognition degree calculation device 1 will be described. The recognition degree calculation device 1 includes an input unit 11, a display unit 12, a storage unit 13, and a control unit 14.
The input unit 11 is composed of, for example, a keyboard and a mouse. The input unit 11 accepts operation input from the operator of the recognition degree calculation device 1. When the recognition degree calculation device 1 can communicate with the wearable terminal 2, the input unit 11 may be composed of buttons, sensor terminals, and the like, and may accept, through them, line-of-sight input to the wearable terminal 2 and gesture input made with the user's hand or the like within the range the wearable terminal 2 can capture (within the user's field of view).
The display unit 12 is composed of, for example, a liquid crystal display or an organic EL (Electro-Luminescence) display. Under the control of the control unit 14, the display unit 12 displays, for example, the moving image viewed by the user, the user's mental model, and the user's degree of recognition of the city.
The storage unit 13 is composed of, for example, a ROM, a RAM, a hard disk, and an external storage device connected to the recognition degree calculation device 1. The external storage device may be connected directly to the recognition degree calculation device 1, or may be communicably connected via a communication network (not shown).
The storage unit 13 stores a recognition degree calculation program that causes the computer to function as an acquisition unit 141, an element detection unit 142, a position specifying unit 143, a creation unit 144, and a calculation unit 145, which are described later. The storage unit 13 also stores the moving image and the line-of-sight data acquired from the wearable terminal 2.
The control unit 14 is composed of, for example, a CPU. The control unit 14 includes the acquisition unit 141, the element detection unit 142, the position specifying unit 143, the creation unit 144, and the calculation unit 145. The functions of the control unit 14 are described below with reference to a flowchart.
FIG. 3 is a flowchart showing the flow of the processing by which the recognition degree calculation device 1 according to the first embodiment calculates the degree of recognition.
[Acquisition of the Moving Image and the Line-of-Sight Movement Pattern]
First, the acquisition unit 141 acquires a moving image showing a spatial route through a city or a building and the pattern of movement of the user's line of sight corresponding to the moving image (S1). Specifically, the acquisition unit 141 transmits a request to acquire the moving image and the line-of-sight data to the wearable terminal 2, and receives the moving image and the line-of-sight data from the wearable terminal 2.
Next, the acquisition unit 141 analyzes the movement of the user's line of sight indicated by the acquired line-of-sight data and identifies the line-of-sight movement pattern it represents. FIG. 4 is a diagram showing the user's line of sight in one scene included in the moving image showing the spatial route according to the first embodiment. FIG. 4 shows a left line of sight VL indicating the line of sight of the left eye, a right line of sight VR indicating the line of sight of the right eye, and an intermediate point VI indicating the midpoint between the left line of sight VL and the right line of sight VR. By analyzing the trajectory of the intermediate point VI, for example, the acquisition unit 141 identifies look-around, confirmation, or gaze as the pattern of movement of the user's line of sight. In the example shown in FIG. 4, the left line of sight VL, the right line of sight VR, and the intermediate point VI are all displayed, but this is not a limitation, and only a single line of sight may be displayed.
"Look-around" is, for example, viewing of the same visual target for 1/5 second or less. Look-around is viewing by which the user mainly orients himself or herself in the space. It is also, for example, a state in which the user is merely looking at a range containing some element without grasping what that element is. When the viewing time of the same visual target is 1/5 second or less, the acquisition unit 141 determines that the user looked around the scenery displayed at that position.
"Confirmation" is viewing of the same visual target for more than 1/5 second but less than 1 second. Confirmation is viewing by which the user orients himself or herself in the space, or viewing by which the user locates the viewed target. It is a state in which the user grasps that some element exists but not what that element is.
"Gaze" is viewing of the same visual target for 1 second or longer. Gaze is viewing by which the user confirms the content of the viewed target. It is a state in which the user can read the content of figures, forms, characters, and the like (for example, the shapes of windows and doors, the patterns of finishing materials, the forms of buildings, plants, furniture, cars, and people, and the information on signs, maps, and computer screens).
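Because the three viewing patterns are defined purely by how long the same target is viewed, they can be classified by simple thresholding. The sketch below is illustrative only; it assumes fixation durations (in seconds) have already been extracted from the line-of-sight data, and the function name is hypothetical, but the 1/5 second and 1 second thresholds follow the definitions above.

```python
def classify_gaze_pattern(fixation_duration_s: float) -> str:
    """Classify a fixation on the same visual target by its duration,
    following the look-around / confirmation / gaze definitions above."""
    if fixation_duration_s <= 0.2:       # at most 1/5 second
        return "look-around"             # user orients self in space; element not identified
    elif fixation_duration_s < 1.0:      # longer than 1/5 second, shorter than 1 second
        return "confirmation"            # user notices an element but not what it is
    else:                                # 1 second or longer
        return "gaze"                    # user reads the content of the target

# Example: a 0.6-second fixation on the same target is treated as "confirmation".
print(classify_gaze_pattern(0.6))
```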
[Detection of Elements Recognized by the User]
Next, the element detection unit 142 detects, based on the moving image acquired by the acquisition unit 141 and the line-of-sight movement pattern, elements that the user is presumed to have recognized in the city or the like (S2).
The elements here are the paths, edges, districts, nodes, and landmarks proposed by Kevin Lynch, defined as follows.
A "path" is a route that the user takes daily or occasionally, or might take.
An "edge" is a linear element other than a path, that is, a boundary between two elements, such as a coastline, a railway track, the edge of a development site, or a wall.
A "district" is a portion of the city of medium to large size and is an element with a two-dimensional extent. The user enters a district; it is usually recognized from the inside, but when it is visible from the outside it is also referred to from the outside.
A "node" is a major point within the city. The user can enter a node, head toward it, or depart from it.
A "landmark" is something the user views from the outside, for example a building, a signboard, a shop, or a mountain, that is, something simply defined. The user does not enter a landmark but views it from the outside.
Specifically, the element detection unit 142 first analyzes each of the plurality of images included in the moving image and identifies, as the composition corresponding to the image, a one-point perspective composition or a two-point perspective composition. More specifically, for example, the element detection unit 142 binarizes the pixel values of each image by brightness and identifies the boundary lines between the two classes as perspective-projection lines. The element detection unit 142 then identifies the composition corresponding to the image from the combination of the perspective-projection lines identified in the image. FIG. 5 is a diagram showing an example in which perspective-projection lines have been identified for an image according to the first embodiment. In FIG. 5, it can be seen that a one-point perspective composition has been identified for the image.
The element detection unit 142 identifies the drawable perspective-projection lines by binarizing the pixel values, but this is not a limitation. The element detection unit 142 may identify the drawable perspective-projection lines by analyzing figures and coordinate axes using a perspective structure, by quantizing pixels into multiple levels based on saturation, by extracting lines from the image with a Hough-transform algorithm, by quantizing pixels based on changes in pixel values between images captured before and after the image in question, or by a combination of these. Furthermore, the element detection unit 142 may store in advance, in the storage unit 13, a plurality of images corresponding to perspective compositions, and identify the perspective composition by matching the images included in the moving image acquired by the acquisition unit 141 against the stored images.
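As a rough, non-authoritative sketch of the kind of line extraction described above, the following combines brightness binarization with a probabilistic Hough transform. The use of OpenCV and the threshold values are assumptions for illustration; the embodiment does not name a particular library, and deciding whether the extracted lines form a one-point or two-point perspective composition (by checking convergence toward a vanishing point) is left out here.

```python
import cv2
import numpy as np

def detect_perspective_lines(frame_bgr: np.ndarray) -> np.ndarray:
    """Binarize a frame by brightness and extract candidate perspective-projection
    line segments (x1, y1, x2, y2) with a probabilistic Hough transform."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)           # boundaries between the two brightness classes
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=60, maxLineGap=10)
    return lines if lines is not None else np.empty((0, 1, 4), dtype=np.int32)
```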
Next, the element detection unit 142 detects the elements the user is presumed to have recognized based on the identified composition and the pattern of movement of the user's line of sight. Specifically, the element detection unit 142 detects the elements by referring to element specifying information, shown in FIG. 6A, that associates elements with composition patterns and line-of-sight movement patterns. For example, when the composition of an image included in the moving image is a one-point perspective composition and the line-of-sight movement pattern corresponding to that image is gaze, the element detection unit 142 detects a "path" as the element the user is presumed to have recognized.
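To make the role of the element specifying information concrete, here is a minimal lookup sketch. Only the first table entry (one-point perspective plus gaze yields a path) is taken from the text; the other entries are hypothetical placeholders, since the full contents of FIG. 6A are not reproduced in this description.

```python
from typing import Optional

# Element specifying information: (composition pattern, line-of-sight pattern) -> element.
# Only the first entry is stated in the text; the rest are illustrative placeholders.
ELEMENT_TABLE = {
    ("one-point perspective", "gaze"): "path",
    ("one-point perspective", "confirmation"): "path",
    ("two-point perspective", "gaze"): "landmark",
    ("two-point perspective", "look-around"): "district",
}

def detect_element(composition: str, gaze_pattern: str) -> Optional[str]:
    """Return the element the user is presumed to have recognized, or None
    if the combination is not listed in the element specifying information."""
    return ELEMENT_TABLE.get((composition, gaze_pattern))

# Example from the text: a one-point perspective frame viewed with a "gaze" pattern
# is detected as a "path".
assert detect_element("one-point perspective", "gaze") == "path"
```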
The element detection unit 142 may also identify elements based on phase. Here, a phase is a geometric region constructed from the various physical events around the user that are perceived visually, and from it concepts such as inside and outside can be derived. For example, the user may be conscious of being inside some target (element), perceiving it from within with an accompanying feeling such as being enclosed, or may be conscious of being outside some target (element), perceiving it from without with a feeling such as openness. The element detection unit 142 detects the phase based on, for example, the composition of an image included in the moving image and the coordinate axes in that composition. The element detection unit 142 may further detect the phase based on the line-of-sight movement pattern and the position of the object the user viewed in the image. For example, when the line-of-sight movement pattern is look-around and the look-around angle is narrow, the phase tends to lie inside the object looked around, whereas when the look-around angle is wide, the phase tends to lie outside it. When the line-of-sight movement pattern is a series of confirmations of an object, the phase tends to lie inside the object. When the line-of-sight movement pattern is gaze, the phase tends to lie outside the gazed object.
FIG. 6B is a diagram showing another example of the element specifying information. In the element specifying information shown in FIG. 6B, elements are associated with composition patterns, line-of-sight movement patterns, and phases. Even when a plurality of elements is associated with a given combination of a line-of-sight movement pattern and a composition pattern, the element detection unit 142 can narrow the result down to a single element based on the phase.
The element detection unit 142 may also identify the composition patterns of the plurality of images included in the moving image captured by the wearable terminal 2 and detect the elements further based on those composition patterns. For example, composition patterns of a plurality of images may be stored in the storage unit 13 in association with information indicating elements, and the element associated with the composition pattern identified by the element detection unit 142 may be identified. The element detection unit 142 may also detect, from the composition pattern, a region in which geometric coordinate axes can be formed, and detect the elements of the city based on that region, the viewing movement pattern, and the phase.
[Specifying the Positions of the Detected Elements]
Next, the position specifying unit 143 specifies the position, in the city or the like, of each element detected by the element detection unit 142 (S3). Specifically, the position specifying unit 143 specifies the position indicated by the position information included in the line-of-sight data corresponding to the detected element as the position of that element.
The position specifying unit 143 may also acquire, from the wearable terminal 2, information in which each image included in the moving image is associated with the position information (shooting position) of that image. In that case, the position specifying unit 143 may specify the position of an element based on the shooting position associated with the image, among the images included in the moving image, in which the element detection unit 142 detected the element.
The position specifying unit 143 may also adjust the position of an element based on the terminal position included in the line-of-sight data and the user's viewing direction. For example, since an image shows the scenery in the user's viewing direction, an element contained in the image is located in the viewing direction rather than exactly at the position indicated by the line-of-sight data. The position specifying unit 143 may therefore adjust the position of the element, based on the viewing direction included in the line-of-sight data, from the position indicated by the specified position information to a position corresponding to the viewing direction.
When the position information of each image included in the moving image cannot be specified and only the position where shooting of the moving image started is known, the position specifying unit 143 may specify the position of an element based on a plurality of consecutive images included in the moving image. For example, the position specifying unit 143 identifies the user's direction of movement by analyzing the consecutive images, identifies the position corresponding to the image in which the element was detected based on that direction and the position where shooting started, and specifies the position of the element based on that position.
FIG. 7 is a diagram showing an example of specifying the position of an element based on a plurality of images. As shown in FIG. 7, suppose the moving image contains four consecutive images (A) to (D). The position specifying unit 143 analyzes images (A) to (D) and detects the objects and contour lines contained in each image, as shown in (a) to (d). In FIGS. 7(a) to 7(d), it can be seen that two contour lines and objects X to Z have been detected as a result of analyzing images (A) to (D). The position specifying unit 143 then identifies the user's direction of travel and movement speed from the positions of the objects and contour lines in each of the images, and identifies the position corresponding to each image based on the position where shooting of the moving image started, the identified direction of travel, and the identified movement speed.
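A simplified sketch of the dead-reckoning idea described above: given only the position where recording started, the position of the frame in which an element was detected is estimated by accumulating per-frame displacements inferred from the image analysis. How each displacement is obtained (for example, from the movement of the detected objects and contour lines between consecutive frames) is deliberately left abstract, and the function name is an assumption.

```python
from typing import List, Tuple

def estimate_frame_positions(start_pos: Tuple[float, float],
                             frame_displacements: List[Tuple[float, float]]
                             ) -> List[Tuple[float, float]]:
    """Accumulate per-frame displacement vectors (dx, dy), derived from the movement
    of objects and contour lines between consecutive frames, to estimate the camera
    position at every frame starting from the recording start point."""
    positions = [start_pos]
    x, y = start_pos
    for dx, dy in frame_displacements:
        x, y = x + dx, y + dy
        positions.append((x, y))
    return positions

# If an element was detected in frame i, its position is approximated by positions[i],
# optionally shifted toward the user's viewing direction as noted above.
```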
[Creation of the Mental Model]
Next, the creation unit 144 creates a mental model representing the map that the user envisions of the city or the like, based on the elements detected by the element detection unit 142 and the element positions specified by the position specifying unit 143 (S4). For example, the creation unit 144 generates a mental model such as the one shown in FIG. 8 by drawing, on a blank drawing, images representing the elements detected by the element detection unit 142 at the positions specified by the position specifying unit 143.
As shown in FIG. 9, the creation unit 144 may also create the mental model by drawing images representing the elements detected by the element detection unit 142, at the positions specified by the position specifying unit 143, on an electronic map (a blank base map) that contains the spatial route of the city or the like but carries no information such as elements.
The creation unit 144 may also generate, as the mental model, information that associates each of the plurality of elements detected by the element detection unit 142 with position information indicating the position specified for that element by the position specifying unit 143, without creating a map itself.
The creation unit 144 is assumed to create the mental model as a two-dimensional map model, but it may create a three-dimensional map model instead.
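A minimal sketch of the "map-less" mental model just described, that is, the detected elements simply associated with their estimated positions; plotting these records onto a blank drawing or base map would yield the two-dimensional mental map of FIG. 8 or FIG. 9. The class and function names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MentalModelEntry:
    element: str                     # "path", "edge", "district", "node" or "landmark"
    position: Tuple[float, float]    # specified position of the element

def build_mental_model(detections: List[Tuple[str, Tuple[float, float]]]
                       ) -> List[MentalModelEntry]:
    """Associate each detected element with its specified position; the resulting
    list is the mental model, which can later be rendered onto a blank map."""
    return [MentalModelEntry(element, pos) for element, pos in detections]
```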
FIG. 10 is a diagram showing an example of creating a three-dimensional mental model. First, based on image (a), the element detection unit 142 infers geometric coordinate axes from the composition and figures in the image. At the start of the exploration, spatial orientation is insufficient, and the elements and the directions in which they exist are ambiguous.
Next, suppose the element detection unit 142 detects that, in image (b), the user repeatedly confirmed a region showing a road as the viewed object. In this case, geometric coordinate axes for which the user feels an internal phase with respect to the object are recognized, and a path is generated in the mental model.
Next, suppose the element detection unit 142 detects that, in image (c), the user gazed at a region showing a road intersection as the viewed object. In this case, geometric coordinate axes for which the user feels an external phase with respect to the object are recognized, and a node is generated in the mental model.
Next, suppose that, in image (d), the user looked around a building standing beside the road as the viewed object. In this case, geometric coordinate axes for which the user feels an external phase with respect to the object are recognized, and a district is generated in the mental model.
Next, suppose that, in image (e), the user looked around a region showing a road as the viewed object. In this case, geometric coordinate axes for which the user feels an internal phase with respect to the object are recognized, and a path is generated in the mental model.
Next, suppose that, in image (f), the user gazed at a building standing at the end of the road identified as a path in image (e). In this case, geometric coordinate axes for which the user feels an external phase with respect to the object are recognized, and a landmark is generated in the mental model.
Finally, suppose that in image (g) the user confirms, as an object, a building that is visible between other buildings. In this case, a landmark is generated in the mental model at a position corresponding to the confirmed position. By applying a three-dimensional model in this way, the mental model can be generated so that it better matches the user's search behavior.
For example, when the user detects a distant building or the like as a landmark, a two-dimensional mental model may not be able to represent that landmark, whereas a three-dimensional mental model can also reflect buildings that exist far away. A designer can then evaluate the degree of influence that such a landmark has on the city or the like, for example by checking, on the three-dimensional mental model, how well multiple users recognize the landmark.
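The correspondence between gaze behavior, object category, and generated element type illustrated in the FIG. 10 walk-through can be pictured as a simple rule table. The following Python fragment is only an illustrative sketch; the identifiers (`GazePattern`, `infer_element`) and the string labels are assumptions made for the example, not part of the embodiment.

```python
from enum import Enum
from typing import Optional

class GazePattern(Enum):
    CONTINUOUS_CONFIRMATION = "continuous_confirmation"  # the gaze keeps following one region
    GAZE = "gaze"                                         # the gaze stays fixed on one region
    LOOK_AROUND = "look_around"                           # the gaze scans across a region

# Hypothetical rule table mirroring images (b) to (f) of FIG. 10:
# (gaze behavior, object category) -> element generated in the mental model.
ELEMENT_RULES = {
    (GazePattern.CONTINUOUS_CONFIRMATION, "road"): "path",  # internal phase
    (GazePattern.LOOK_AROUND, "road"): "path",              # internal phase
    (GazePattern.GAZE, "intersection"): "node",             # external phase
    (GazePattern.LOOK_AROUND, "building"): "district",      # external phase
    (GazePattern.GAZE, "building"): "landmark",             # external phase
}

def infer_element(pattern: GazePattern, object_category: str) -> Optional[str]:
    """Return the element presumed to be recognized, or None when no rule applies."""
    return ELEMENT_RULES.get((pattern, object_category))

print(infer_element(GazePattern.GAZE, "intersection"))  # -> node
```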
[Calculation of the degree of recognition]
Subsequently, the calculation unit 145 calculates the user's degree of recognition of the city or the like, an indication of how easy the city or the like is for the user to understand, based on the mental model created by the creation unit 144 and a map (reference information) showing the actual elements, and their positions, that correspond to the spatial route of the city or the like (S5).
Specifically, the calculation unit 145 calculates the degree of recognition based on the degree of coincidence between the elements and element positions included in the mental model and the elements and element positions included in an actual map such as the one shown in FIG. 11. More specifically, the calculation unit 145 compares the elements and element positions included in the mental model with those included in the actual map. The calculation unit 145 then calculates the detection rate achieved by the element detection unit 142 with respect to the elements included in the actual map, and the error between the element positions in the actual map and the specified positions. Finally, the calculation unit 145 calculates the degree of recognition based on at least one of the calculated detection rate and error.
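As one possible reading of the calculation above, the degree of recognition can be composed from a detection rate over the elements of the actual map and the positional error of the matched elements. The sketch below is illustrative only: the equal weighting, the `max_error` normalization, and all identifiers are assumptions, not the formula of the embodiment.

```python
import math
from typing import Dict, Tuple

# element name -> (x, y) position; toy stand-ins for the mental model and the actual map
MentalModel = Dict[str, Tuple[float, float]]
ReferenceMap = Dict[str, Tuple[float, float]]

def recognition_degree(mental: MentalModel, reference: ReferenceMap,
                       max_error: float = 50.0) -> float:
    """Combine detection rate and mean position error into a 0..1 score (assumed formula)."""
    if not reference:
        return 0.0
    detected = [name for name in reference if name in mental]
    detection_rate = len(detected) / len(reference)
    if not detected:
        return 0.0
    errors = [math.dist(mental[name], reference[name]) for name in detected]
    mean_error = sum(errors) / len(errors)
    position_score = max(0.0, 1.0 - mean_error / max_error)  # 1.0 when positions match exactly
    return 0.5 * detection_rate + 0.5 * position_score

reference = {"station": (0, 0), "tower": (100, 40), "bridge": (30, 80)}
mental = {"station": (5, -3), "tower": (90, 55)}
print(round(recognition_degree(mental, reference), 3))  # -> 0.714
```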
Here, the calculation unit 145 may restrict the comparison to the path and node elements among the elements included in the mental model when calculating the degree of coincidence with the elements included in the actual map. For example, when a three-dimensional mental model has been created but the map to be compared against is two-dimensional, the degree of coincidence with the elements in the actual map cannot be calculated accurately. The calculation unit 145 may therefore simplify the notation of the mental model into one limited to path and node elements, and calculate the degree of coincidence between the simplified mental model and the actual map. FIG. 12 is a diagram illustrating an example of simplifying the mental model, and FIGS. 12(a) to 12(c) are front views of a three-dimensional mental model. The calculation unit 145 first identifies the paths, and the edges corresponding to the paths, in the mental model shown in FIG. 12(a), and draws lines indicating the paths as shown in FIG. 12(b). The calculation unit 145 then simplifies the mental model by deleting from the three-dimensional mental model all elements other than the lines drawn as shown in FIG. 12(b).
The calculation unit 145 causes, for example, the display unit 12 to display information indicating the calculated degree of recognition. The calculation unit 145 may also store the information indicating the calculated degree of recognition in the storage unit 13.
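The simplification of FIG. 12, reducing the three-dimensional mental model to its paths and nodes before comparing it with a two-dimensional map, might be sketched as follows. This assumes each element carries a type label and a 3-D coordinate; `Element` and `simplify` are illustrative names only.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Element:
    kind: str                              # "path", "node", "landmark", "district", ...
    position: Tuple[float, float, float]   # 3-D coordinate in the mental model

def simplify(elements: List[Element]) -> List[Element]:
    """Keep only paths and nodes and drop the vertical coordinate,
    so that the model can be compared against a 2-D map."""
    kept = []
    for e in elements:
        if e.kind in ("path", "node"):
            x, y, _z = e.position
            kept.append(Element(e.kind, (x, y, 0.0)))
    return kept
```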
[Calculation of complexity]
Subsequently, the calculation unit 145 calculates a complexity indicating how complex the city or building is, based on the number of elements detected by the element detection unit 142. For example, the number of elements detected per unit time or per unit distance may be specified in advance as a reference value for each of several videos showing different cities or the like. The calculation unit 145 may then calculate the number of elements detected per unit time in the video, based on the length of the video acquired by the acquisition unit 141 and the number of elements detected by the element detection unit 142, and calculate the complexity from that number and the reference value.
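Read this way, the complexity is a detection rate per unit time normalized by a reference value measured beforehand. A minimal sketch, with hypothetical names and an assumed reference rate:

```python
def complexity(num_elements: int, video_length_sec: float, reference_rate: float) -> float:
    """Elements detected per second, divided by a reference rate specified in advance
    from videos of several cities (assumed formula)."""
    if video_length_sec <= 0 or reference_rate <= 0:
        raise ValueError("video length and reference rate must be positive")
    return (num_elements / video_length_sec) / reference_rate

# e.g. 42 elements detected in a 10-minute walk, against a reference of 0.05 elements/s
print(round(complexity(42, 600, 0.05), 2))  # -> 1.4 (denser in elements than the reference)
```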
Alternatively, a mental model (a collective-intelligence map) may be generated in which the elements that the element detection unit 142 detected for each of a plurality of users are reflected on the actual map. The calculation unit 145 may then calculate the complexity based on the ratio of the elements detected by the element detection unit 142 to the elements included in that mental model.
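Under this alternative, the complexity reduces to the share of the collectively known elements that a given user detected. A sketch, assuming both inputs are plain sets of element identifiers:

```python
def complexity_from_collective(detected_by_user: set, collective_elements: set) -> float:
    """Ratio of elements this user detected to the elements of the collective map (assumed measure)."""
    if not collective_elements:
        return 0.0
    return len(detected_by_user & collective_elements) / len(collective_elements)

# e.g. the user detected 3 of the 12 elements on the collective map
print(complexity_from_collective({"a", "b", "c"}, {chr(ord("a") + i) for i in range(12)}))  # -> 0.25
```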
[Effects of the first embodiment]
As described above, according to the first embodiment, the recognition degree calculation device 1 detects the elements that the user is presumed to have recognized in the city or the like, based on the acquired video and the pattern of movement of the user's line of sight, and can therefore detect that the user is perceiving an image of the city or the like.
The recognition degree calculation device 1 then creates a mental model representing the map that the user envisions for the city or the like, based on the elements detected by the element detection unit 142 and the element positions specified by the position specifying unit 143, and calculates the user's degree of recognition of the city or the like based on that mental model and reference information indicating the actual elements and their positions. In this way, the recognition degree calculation device 1 can turn the extent of the image the user holds of the city or the like, that is, how easy the city or the like is for the user to understand, into an index.
A designer of a city or the like can thus evaluate, based on the degree of recognition calculated by the recognition degree calculation device 1, whether the city or the like is easy for users to understand, and can apply the structure of a city or the like that was rated as easy to understand to the design of new cities and the like.
For example, a map creator can hand a map to a user, have the user explore a city with it, and use the resulting degree of recognition of the city to create maps that are easier for users to understand. This allows the map creator to develop, for example, map notations that are generally easy to understand. Likewise, a building designer can hand a floor plan to a user, have the user explore the inside of the building, and use the resulting degree of recognition of the building to evaluate whether the floor plan is easy for users to understand. This allows the building designer to develop, for example, floor-plan notations that are generally easy to understand.
Furthermore, by notifying the user of the degree of recognition, the user can be made aware of how well he or she grasps the city or the like. For a user with a low degree of recognition, for example, the map created by the map creator can be provided for exploring the city, or the floor plan created by the building designer can be provided for exploring the inside of the building. This helps the user acquire and search for elements in a well-balanced way in the city or the like, and can improve the user's innate poor sense of direction or sense of orientation.
The recognition degree calculation device 1 also calculates a complexity indicating how complex a city or building is, based on the number of detected elements. Conventionally, the complexity of cities and buildings, which relates to the uniqueness of the user's experience of them, was not indexed either, so designers could not sufficiently take into account which kinds of design produce a highly unique experience. For example, because the complexity users feel toward a city and the uniqueness of the experience were not indexed, design proposals for cities, public buildings, and the like could not be evaluated from the viewpoint of complexity, and it was difficult to reach agreement on the plans a designer presented. In contrast, since the recognition degree calculation device 1 can calculate the complexity, a designer can evaluate how complex the city shown in the video is compared with other cities, apply the result to the design of cities and the like that offer a highly unique experience, and design cities and the like that users perceive as complex and distinctive.
In the first embodiment, the calculation unit 145 calculates the degree of recognition using the mental model and the actual map shown in FIG. 11 as the reference information. However, when the degree of recognition is calculated for many users, the resulting values may turn out to be too low or too high across the board. The control unit of the recognition degree calculation device 1 may therefore create a mental model (a collective-intelligence map) in which the elements that the element detection unit 142 detected for each of a plurality of users are reflected on the actual map, as shown in FIG. 13, based on the degrees of recognition of the plurality of users or on how strongly each element was detected by them. The calculation unit 145 may then calculate the degree of recognition based on the elements and element positions included in the user's mental model and the elements and element positions included in this mental model that reflects the elements detected by the plurality of users on the actual map.
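One way to picture the collective-intelligence map is as an aggregation of the elements detected by many users, kept only when enough users noticed them; using it as the reference keeps the degree of recognition from being skewed by elements almost nobody perceives. The rule below (a minimum share of users) and all names are assumptions for illustration.

```python
from collections import Counter
from typing import Iterable, Set

def collective_map(detections_per_user: Iterable[Set[str]], min_share: float = 0.5) -> Set[str]:
    """Keep the elements detected by at least `min_share` of the users (assumed rule)."""
    per_user = [set(d) for d in detections_per_user]
    if not per_user:
        return set()
    counts = Counter(element for detected in per_user for element in detected)
    threshold = min_share * len(per_user)
    return {element for element, n in counts.items() if n >= threshold}

# Three users exploring the same route: only elements seen by at least half of them remain.
print(collective_map([{"station", "tower"}, {"station", "bridge"}, {"station"}]))  # -> {'station'}
```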
<Second Embodiment>
[Acquiring gaze data based on a computer graphics video]
Next, the second embodiment will be described. The second embodiment differs from the first embodiment in that the wearable terminal 2 does not detect the user's line of sight while the user passes along an actual spatial route in a city or the like, but instead detects the line of sight while the user views a computer graphics video (hereinafter referred to as a CG video) showing the city or the like. Only the differences from the first embodiment are described below; descriptions of the parts that are the same as in the first embodiment are omitted as appropriate.
In the second embodiment, a user wearing the wearable terminal 2 views a CG video showing a city or the like, displayed on the display unit of a video display device (not shown).
The line-of-sight detection unit 223 of the wearable terminal 2 detects the user's line of sight with respect to the CG video. The line-of-sight detection unit 223 generates gaze data in which the time at which the line of sight was detected is associated with the gaze position indicating the coordinates of the line of sight in the CG video, and stores the gaze data in the storage unit 21. Here, the time at which the line of sight was detected is associated with the playback time of the CG video.
The acquisition unit 141 of the recognition degree calculation device 1 acquires the CG video showing the spatial route of the city or the like and the pattern of movement of the user's line of sight corresponding to the CG video. Specifically, the acquisition unit 141 acquires the CG video from the video display device and acquires the gaze data corresponding to the CG video from the wearable terminal 2. The acquisition unit 141 then acquires the pattern of movement of the user's line of sight based on the gaze data.
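One way to derive a movement pattern from raw gaze data is a simple dispersion threshold: consecutive samples that stay within a small window are labeled a fixation (gazing), while larger jumps are labeled movement. The embodiment does not prescribe a particular algorithm; the function, thresholds, and labels below are assumptions.

```python
from typing import List, Tuple

GazeSample = Tuple[float, float, float]  # (time in seconds, x, y) in video coordinates

def classify_pattern(samples: List[GazeSample],
                     max_dispersion: float = 20.0,
                     min_duration: float = 0.2) -> List[Tuple[float, float, str]]:
    """Label stretches of gaze data as (start time, end time, 'fixation' | 'movement')."""
    labels = []
    i = 0
    while i < len(samples):
        j = i
        xs, ys = [samples[i][1]], [samples[i][2]]
        # extend the window while the gaze stays within max_dispersion
        while j + 1 < len(samples):
            xs.append(samples[j + 1][1])
            ys.append(samples[j + 1][2])
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                xs.pop()
                ys.pop()
                break
            j += 1
        duration = samples[j][0] - samples[i][0]
        kind = "fixation" if j > i and duration >= min_duration else "movement"
        labels.append((samples[i][0], samples[j][0], kind))
        i = j + 1
    return labels
```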
The element detection unit 142 detects the elements that the user is presumed to have recognized in the city or the like, based on the CG video acquired by the acquisition unit 141 and the pattern of movement of the line of sight.
The position specifying unit 143 specifies the position, in the city or the like, of each element detected by the element detection unit 142. Specifically, the position specifying unit 143 refers to the gaze data and identifies the detection time of the line of sight associated with the gaze position at which the element was detected. The position specifying unit 143 then identifies the playback time of the CG video corresponding to the identified detection time. Since position information indicating a position in the city or the like is associated with each of a plurality of playback times of the CG video, the position specifying unit 143 specifies the position indicated by the position information associated with that playback time as the position of the element.
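Position identification here chains two lookups: from the gaze sample that triggered the element detection to its detection time (which coincides with a playback time of the CG video), and from that playback time to the position information attached to the video. A sketch, assuming positions are keyed by discrete playback times and the nearest key is used; all identifiers are illustrative.

```python
from bisect import bisect_left
from typing import Dict, Tuple

def position_of_element(detection_time: float,
                        positions_by_playback_time: Dict[float, Tuple[float, float]]
                        ) -> Tuple[float, float]:
    """Return the position associated with the playback time closest to the detection time."""
    times = sorted(positions_by_playback_time)
    if not times:
        raise ValueError("no position information is attached to the CG video")
    i = bisect_left(times, detection_time)
    if i == 0:
        nearest = times[0]
    elif i == len(times):
        nearest = times[-1]
    else:
        before, after = times[i - 1], times[i]
        nearest = before if detection_time - before <= after - detection_time else after
    return positions_by_playback_time[nearest]

# e.g. positions (latitude, longitude) attached every 5 seconds of playback
positions = {0.0: (35.6586, 139.7454), 5.0: (35.6590, 139.7459), 10.0: (35.6595, 139.7464)}
print(position_of_element(6.2, positions))  # -> (35.659, 139.7459)
```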
Subsequently, the creation unit 144 creates the user's mental model based on the elements detected by the element detection unit 142 and the element positions specified by the position specifying unit 143. The calculation unit 145 calculates the user's degree of recognition of the city or the like based on the mental model created by the creation unit 144 and a map (reference information) indicating the elements, and their positions, included in the city or the like corresponding to the CG video.
[Effects of the second embodiment]
As described above, according to the second embodiment, the recognition degree calculation device 1 detects the elements that the user is presumed to have recognized in the city or the like shown in a CG video, based on the CG video and the corresponding pattern of line-of-sight movement. If the CG video shows a city or the like that is still at the planning stage or under development, the recognition degree calculation device 1 can detect the elements the user is presumed to have recognized for that city or the like and calculate the degree of recognition. A designer of such a city or the like can therefore evaluate, already at the planning stage or during development, whether it will be easy for users to understand.
In the second embodiment, the wearable terminal 2 acquires gaze data while the user views a CG video, but the present invention is not limited to this. For example, the wearable terminal 2 may acquire gaze data while the user views a video of the city or the like that was captured in advance. In this way, the recognition degree calculation device 1 can calculate the user's degree of recognition of the city or the like even if the user does not go to the place where the city or the like is located.
The CG video is assumed to show a predetermined spatial route in a city or the like, but it is not limited to this. For example, the CG video may show a place that the user cannot actually pass through, such as the air above the city. A user who views such a CG video can then recognize an image of the city or the like that would not normally be perceived.
The CG video may also be, for example, a CG video in a VR (Virtual Reality) or AR (Augmented Reality) format that can display an arbitrary view of a three-dimensional spatial model designed in advance. For example, it may be a video in which the CG changes in response to instruction information received from the user via a controller or the like for indicating the direction the user wants to check, a video in which the CG changes in response to the user's detected line of sight, or a format in which the CG itself does not change but the instruction information from the user is internally stored as a log in association with the shape of the three-dimensional CG so that it can be checked afterwards. In this way, the user can view a single element from various directions, which makes it easier for the recognition degree calculation device 1 to identify the three-dimensional model corresponding to the element the user is viewing and improves the accuracy of element detection.
<Third Embodiment>
[Acquiring the pattern of line-of-sight movement based on a video captured by the wearable terminal 2]
Next, the third embodiment will be described. The third embodiment differs from the first embodiment in that the wearable terminal 2 generates only a video, without generating gaze data, and the recognition degree calculation device 1 acquires the pattern of movement of the user's line of sight based on the video acquired from the wearable terminal 2. The third embodiment assumes that a terminal without a gaze-data generation function is used as the wearable terminal 2.
In the third embodiment, the acquisition unit 141 acquires, from the wearable terminal 2, a video captured by the wearable terminal 2 that shows the spatial route of a city or the like.
The element detection unit 142 detects the elements that the user is presumed to have recognized in the city or the like, based on the composition corresponding to each of the plurality of images included in the video. Specifically, the element detection unit 142 analyzes each of the plurality of images included in the video to identify the composition corresponding to that image. The element detection unit 142 then refers to the element specifying information shown in FIG. 6A or FIG. 6B and identifies the element associated with the identified composition, thereby detecting the element that the user is presumed to have recognized.
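The lookup from frame composition to element via the element specifying information of FIG. 6A/6B can be pictured as follows. The composition labels and the table contents are placeholders invented for this example; the image-analysis step that assigns a composition label to each frame is not shown.

```python
from typing import Dict, Optional

# Placeholder element specifying information: composition label -> element type
ELEMENT_SPECIFYING_INFO: Dict[str, str] = {
    "one_point_perspective": "path",   # a road vanishing toward a single point
    "open_crossing": "node",           # an intersection opening in several directions
    "facade_dominant": "landmark",     # a single building dominating the frame
    "uniform_texture": "district",     # a homogeneous stretch of buildings
}

def detect_element(composition_label: str) -> Optional[str]:
    """Look up the element associated with the composition identified for a frame."""
    return ELEMENT_SPECIFYING_INFO.get(composition_label)

print(detect_element("open_crossing"))  # -> node
```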
The position specifying unit 143 specifies the position, in the city or the like, of each detected element. Specifically, in the third embodiment, the terminal position acquired by the position information acquisition unit 221 is associated with each playback position of the video. The position specifying unit 143 identifies the playback position of the video based on the image in which the element was detected, and specifies the terminal position associated with that playback position as the position of the detected element.
The functions of the creation unit 144 and the calculation unit 145 are the same as in the first embodiment, and their description is therefore omitted.
[Effects of the third embodiment]
As described above, according to the third embodiment, the recognition degree calculation device 1 can identify the elements that the user is presumed to have recognized in the city or the like based on the video alone, and can therefore calculate the user's degree of recognition of the city or the like even when the user's gaze data cannot be acquired.
The present invention has been described above using embodiments, but the technical scope of the present invention is not limited to the scope described in the above embodiments. It will be apparent to those skilled in the art that various changes and improvements can be made to the above embodiments.
For example, in the embodiments described above, the recognition degree calculation device 1 acquires the video and the gaze data from the wearable terminal 2 by communication, but this is not a limitation. For example, an external storage medium removed from the wearable terminal 2 may be attached to the recognition degree calculation device 1, and the recognition degree calculation device 1 may acquire the video and the gaze data from that external storage medium.
In the embodiments described above, the recognition degree calculation device 1 and the wearable terminal 2 are separate devices, but they may be integrated into a single device. For example, the wearable terminal 2 may have the functions of the recognition degree calculation device 1, or the recognition degree calculation device 1 may have the functions of the wearable terminal 2. When the wearable terminal 2 has the functions of the recognition degree calculation device 1, the input unit of the wearable terminal 2 may be configured with buttons, sensor terminals, and the like, and these buttons and sensor terminals may accept gaze input to the wearable terminal 2 and gesture input made with the user's hand or the like within the range the wearable terminal 2 can capture (within the user's field of view).
DESCRIPTION OF SYMBOLS 1: recognition degree calculation device, 11: input unit, 12: display unit, 13: storage unit, 14: control unit, 141: acquisition unit, 142: element detection unit, 143: position specifying unit, 144: creation unit, 145: calculation unit, 2: wearable terminal, 21: storage unit, 22: control unit, 221: position information acquisition unit, 222: photographing unit, 223: line-of-sight detection unit, 224: transmission unit
Claims (13)
- A recognition degree calculation device comprising:
an acquisition unit that acquires a moving image showing a spatial route of a city or a building, and a pattern of movement of a user's line of sight corresponding to the moving image;
an element detection unit that detects, based on the moving image and the pattern of movement of the line of sight, an element that the user is presumed to have recognized in the city or the building;
a position specifying unit that specifies a position, in the city or the building, of the detected element;
a creation unit that creates, based on the detected element and the specified position of the element, a mental model showing a map that the user envisions for the city or the building; and
a calculation unit that calculates, based on the mental model and reference information indicating the actual elements and element positions corresponding to the spatial route, the user's degree of recognition of the city or the building, which indicates how easy the city or the building is for the user to understand.
- The recognition degree calculation device according to claim 1, wherein the calculation unit calculates the degree of recognition based on the degree of coincidence between the elements and element positions included in the mental model and the elements and element positions included in the reference information.
- The recognition degree calculation device according to claim 1 or 2, wherein the calculation unit calculates a complexity indicating the complexity of the city or the building based on the number of elements detected by the element detection unit.
- The recognition degree calculation device according to any one of claims 1 to 3, wherein the element detection unit detects the element based on a composition corresponding to an image included in the moving image and the pattern of movement of the line of sight.
- The recognition degree calculation device according to any one of claims 1 to 4, wherein the position specifying unit specifies the position of the element based on a shooting position associated with an image included in the moving image.
- The recognition degree calculation device according to any one of claims 1 to 4, wherein the position specifying unit specifies, as the position of the element, a position corresponding to the image in which the element was detected, based on a plurality of consecutive images included in the moving image.
- The recognition degree calculation device according to any one of claims 1 to 6, wherein the acquisition unit acquires, as the moving image, a computer graphics moving image showing the city or the building.
- The recognition degree calculation device according to any one of claims 1 to 5, wherein the acquisition unit acquires, as the moving image, a moving image captured by a wearable terminal worn by the user.
- The recognition degree calculation device according to claim 8, wherein the element detection unit identifies a composition pattern of an image included in the moving image captured by the wearable terminal, and detects the element further based on the composition pattern of the image.
- The recognition degree calculation device according to any one of claims 1 to 7, wherein the acquisition unit acquires the pattern of movement of the user's line of sight detected by a line-of-sight detection device that detects the user's line of sight.
- A recognition degree calculation device comprising:
an acquisition unit that acquires a moving image captured by a wearable terminal worn by a user and showing a spatial route of a city or a building;
an element detection unit that detects, based on a composition corresponding to each of a plurality of images included in the moving image, an element that the user is presumed to have recognized in the city or the building;
a position specifying unit that specifies a position, in the city or the building, of the detected element;
a creation unit that creates, based on the detected element and the specified position of the element, a mental model showing a map that the user envisions for the city or the building; and
a calculation unit that calculates, based on the mental model and reference information indicating the actual elements and element positions in an area corresponding to the spatial route, the user's degree of recognition of the city or the building, which indicates how easy the city or the building is for the user to understand.
- A recognition degree calculation method executed by a computer, the method comprising the steps of:
acquiring a moving image showing a spatial route of a city or a building, and a pattern of movement of a user's line of sight with respect to the moving image;
detecting, based on the moving image and the pattern of movement of the line of sight, an element that the user is presumed to have recognized in the city or the building;
detecting a position, in the city or the building, of the detected element;
creating, based on the detected element and the position of the element, a mental model showing a map that the user envisions for the city or the building; and
calculating, based on the mental model and reference information indicating the actual elements and element positions corresponding to the spatial route, the user's degree of recognition of the city or the building, which indicates how easy the city or the building is for the user to understand.
- A recognition degree calculation program for causing a computer to function as:
an acquisition unit that acquires a moving image showing a spatial route of a city or a building, and a pattern of movement of a user's line of sight corresponding to the moving image;
an element detection unit that detects, based on the moving image and the pattern of movement of the line of sight, an element that the user is presumed to have recognized in the city or the building;
a position specifying unit that specifies a position, in the city or the building, of the detected element;
a creation unit that creates, based on the detected element and the specified position of the element, a mental model showing a map that the user envisions for the city or the building; and
a calculation unit that calculates, based on the mental model and reference information indicating the actual elements and element positions corresponding to the spatial route, the user's degree of recognition of the city or the building, which indicates how easy the city or the building is for the user to understand.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2015/064968 WO2016189633A1 (en) | 2015-05-25 | 2015-05-25 | Degree of awareness computation device, degree of awareness computation method, and degree of awareness computation program |
JP2017520105A JP6487545B2 (en) | 2015-05-25 | 2015-05-25 | Recognition calculation device, recognition calculation method, and recognition calculation program |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2015/064968 WO2016189633A1 (en) | 2015-05-25 | 2015-05-25 | Degree of awareness computation device, degree of awareness computation method, and degree of awareness computation program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016189633A1 true WO2016189633A1 (en) | 2016-12-01 |
Family ID: 57393030
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/064968 WO2016189633A1 (en) | 2015-05-25 | 2015-05-25 | Degree of awareness computation device, degree of awareness computation method, and degree of awareness computation program |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP6487545B2 (en) |
WO (1) | WO2016189633A1 (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007271480A (en) * | 2006-03-31 | 2007-10-18 | Denso It Laboratory Inc | Information providing device |
JP2008225465A (en) * | 2007-02-16 | 2008-09-25 | Nagoya Institute Of Technology | Digital map making system |
JP2013179403A (en) * | 2012-02-28 | 2013-09-09 | Sony Corp | Information processing device, information processing method, and program |
JP2014192861A (en) * | 2013-03-28 | 2014-10-06 | Osaka Gas Co Ltd | Monitoring support system |
JP2015045940A (en) * | 2013-08-27 | 2015-03-12 | 株式会社ジオクリエイツ | Emotion extraction method, emotion extraction program, emotion extraction device and building design method |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110070810A (en) * | 2019-05-27 | 2019-07-30 | 珠海幸福家网络科技股份有限公司 | A kind of building explanation method and building introduction system |
WO2023228931A1 (en) * | 2022-05-26 | 2023-11-30 | 株式会社ジオクリエイツ | Information processing system, information processing device, information processing method, and program |
Also Published As
Publication number | Publication date |
---|---|
JPWO2016189633A1 (en) | 2018-04-19 |
JP6487545B2 (en) | 2019-03-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11315308B2 (en) | Method for representing virtual information in a real environment | |
US9558581B2 (en) | Method for representing virtual information in a real environment | |
WO2015182227A1 (en) | Information processing device and information processing method | |
EP3596588B1 (en) | Gradual transitioning between two-dimensional and three-dimensional augmented reality images | |
WO2019037489A1 (en) | Map display method, apparatus, storage medium and terminal | |
CN112666714A (en) | Gaze direction mapping | |
CN110487262A (en) | Indoor orientation method and system based on augmented reality equipment | |
EP3274964B1 (en) | Automatic connection of images using visual features | |
CN110168615B (en) | Information processing apparatus, information processing method, and storage medium | |
CN112105892B (en) | Method and system for identifying map features using motion data and face metadata | |
US10838515B1 (en) | Tracking using controller cameras | |
CN113570664B (en) | Augmented reality navigation display method and device, electronic equipment and computer medium | |
CN108629799B (en) | Method and equipment for realizing augmented reality | |
JP2016122392A (en) | Information processing apparatus, information processing system, control method and program of the same | |
KR20130137076A (en) | Device and method for providing 3d map representing positon of interest in real time | |
US20210327160A1 (en) | Authoring device, authoring method, and storage medium storing authoring program | |
CN102799378B (en) | A kind of three-dimensional collision detection object pickup method and device | |
JP6487545B2 (en) | Recognition calculation device, recognition calculation method, and recognition calculation program | |
CN113345107A (en) | Augmented reality data display method and device, electronic equipment and storage medium | |
JP6849895B2 (en) | Phase identification device, mobile terminal, phase identification method and program | |
CN111506280B (en) | Graphical user interface for indicating off-screen points of interest | |
JP6473872B2 (en) | Video construction device, pseudo-visual experience system, and video construction program | |
JP2024060347A (en) | Information processing device | |
JP2021033377A (en) | Information terminal device, program, presentation system and server |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15893268; Country of ref document: EP; Kind code of ref document: A1
| ENP | Entry into the national phase | Ref document number: 2017520105; Country of ref document: JP; Kind code of ref document: A
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 15893268; Country of ref document: EP; Kind code of ref document: A1