WO2014122834A1 - Simulation system, simulation apparatus, and product description assistance method - Google Patents
- Publication number
- WO2014122834A1 (PCT/JP2013/080632)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- visual field
- lens
- simulation
- spectacle
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/20—Drawing from basic elements, e.g. lines or circles
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/0016—Operational features thereof
- A61B3/0025—Operational features thereof characterised by electronic signal processing, e.g. eye models
-
- G—PHYSICS
- G02—OPTICS
- G02C—SPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
- G02C7/00—Optical parts
- G02C7/02—Lenses; Lens systems; Methods of designing lenses
- G02C7/024—Methods of designing ophthalmic lenses
- G02C7/028—Special mathematical design techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0621—Item configuration or customization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/18—Image warping, e.g. rearranging pixels individually
Definitions
- the present invention relates to a simulation system, a simulation apparatus, and a product explanation assistance method that allow a person who intends to wear spectacle lenses to see through a spectacle lens that he / she intends to wear through, in a simulated manner.
- Conventionally, a simulation apparatus that allows a person who intends to wear spectacle lenses to experience the lens-wearing state in a simulated manner has been used (see, for example, Patent Document 1).
- By viewing the simulation image prior to ordering the lens, the prospective spectacle lens wearer can experience the appearance (image distortion, blur, etc.) of looking through the spectacle lens.
- Because the spectacle store uses images generated by simulation, it is not necessary to prepare sample lenses matching every prescription desired by prospective wearers, and it is also possible to let a prospective wearer experience how things would look with a lens type not included among the sample lenses.
- In general, the visual characteristics of a lens differ depending on the lens design criteria applied, and therefore the appearance through the spectacle lens also differs.
- Moreover, the appearance through the spectacle lens differs between regions such as the far vision region, the near vision region, and the intermediate vision region. Therefore, when using a simulation device, it is very important to inform the prospective wearer who is simulating the lens-wearing state of these differences in the visual characteristics of the spectacle lens, and to let the wearer judge whether the result of the simulated experience is appropriate, in order for the prospective wearer to be satisfied.
- Although a conventional simulation device can let the prospective wearer simulate the appearance through a spectacle lens, it cannot be said that the differences in visual characteristics between spectacle lenses are necessarily conveyed well enough for the prospective wearer to understand them.
- In particular, the visual characteristics of spectacle lenses come in hundreds of design types depending on the combination of design parameters. Therefore, unless a store clerk fully understands the differences among those hundreds of design types, there is a risk that the clerk cannot accurately explain to the prospective wearer what visual characteristics the selected spectacle lens has.
- Some conventional simulation apparatuses are configured to display the simulation image on the display screen of an HMD (head-mounted display) worn by the person who intends to wear the spectacle lenses (see, for example, Patent Document 1).
- However, an HMD whose display covers a wide field of view at high resolution tends to be large, heavy, and expensive, which places a burden on the person who intends to wear the spectacle lenses.
- There are also small, lightweight, and inexpensive HMDs, but such HMDs limit the field of view and their resolution is not sufficient for simulation. Therefore, with a conventional simulation apparatus, not only the prospective spectacle lens wearer but also the spectacle store side may feel dissatisfied.
- An object of the present invention is to provide a simulation system, a simulation apparatus, and a product description assistance method capable of eliminating the above-mentioned dissatisfaction felt by prospective spectacle lens wearers, spectacle stores, and others.
- the present invention has been devised to achieve the above-described object.
- the present inventor first examined the characteristics of the lens visual characteristics of the spectacle lens.
- The visual characteristics of a lens differ depending on the design type of the spectacle lens, for example in the case of a progressive power lens with an individually designed free-form surface, and they also differ among regions such as the far vision region, the near vision region, and the intermediate vision region. For this reason, it is considered very difficult for anyone to grasp all the differences among the hundreds of types that arise from these combinations. Based on this, the present inventor studied the problem further.
- The inventor arrived at the idea that, if commentary information on the lens visual characteristics, which differ for each lens design type and for each region, is prepared in advance for each design type and each region, and the commentary information corresponding to the displayed content is output in accordance with the display of the simulation image, the characteristics of the lens visual characteristics can be grasped accurately and easily.
- Furthermore, if the simulation image is not displayed for the entire visual field region of the spectacle lens at once, but the entire visual field region is instead divided into a plurality of partial visual field regions and the display is performed selectively for each partial visual field region, the corresponding commentary information can be output easily, and the entire visual field region of the spectacle lens can be shown clearly regardless of the size and resolution of the field of view on the image display side.
- the present invention has been made based on the above-described new idea by the present inventors.
- A first aspect of the present invention is a simulation system in which a terminal device used at a spectacle store, a display device viewed by a prospective spectacle lens wearer who has visited the spectacle store, and a server device functioning as a computer are connected so as to be able to communicate with one another.
- The server device includes an image generation unit that, based on lens design data of the spectacle lens the prospective wearer plans to wear, performs image processing that reflects the lens visual characteristics of the spectacle lens on original images for a plurality of partial visual field regions constituting the entire visual field region of the spectacle lens, thereby generating a simulation image for each of the plurality of partial visual field regions, and an information storage unit that stores commentary information about the characteristics of the lens visual characteristics in association with the partial visual field regions and with the lens design data.
- The display device includes a display screen unit that selectively displays the simulation image for each partial visual field region so that the prospective wearer can view it, and the terminal device includes an information output unit that acquires from the information storage unit, and outputs, the commentary information corresponding to the partial visual field region displayed on the display screen unit of the display device.
- In a second aspect of the present invention, the image generation unit of the server device performs a process of superimposing contour lines of the clearness index of the spectacle lens on the simulation image, and the display screen unit of the display device displays the simulation image on which the contour lines are superimposed.
- In a third aspect of the present invention, the image generation unit of the server device reflects a frame image of the spectacle frame holding the spectacle lens in the simulation image, and the display screen unit of the display device displays the simulation image in which the frame image is reflected.
- In a fourth aspect of the present invention, the display device is a head-mounted display attached to the head of the person who intends to wear the spectacle lens, and the display screen unit individually displays an image for each of the left and right eyes of that person.
- In a fifth aspect of the present invention, the terminal device is a portable information terminal used by a store clerk of the spectacle store, and the information output unit displays and outputs the commentary information to the store clerk.
- In a sixth aspect of the present invention, the information output unit outputs the commentary information by voice.
- A seventh aspect of the present invention is the invention according to any one of the first to sixth aspects, wherein at least one of the display device and the terminal device is provided with an operation unit for selecting the partial visual field region to be displayed on the display device.
- In an eighth aspect of the present invention, the terminal device includes an information input unit for inputting parameter information about the spectacle lens that the prospective wearer plans to wear, and the server device includes a data generation unit that, based on the parameter information input via the information input unit, specifies the type of lens design standard to be applied to that spectacle lens and generates lens design data of the spectacle lens while applying the specified type of lens design standard.
- A ninth aspect of the present invention is a simulation apparatus in which a terminal device used at a spectacle store and a display device viewed by a prospective spectacle lens wearer who has visited the spectacle store are connected so as to be able to communicate with each other.
- The display device includes a display screen unit that selectively displays, for each of a plurality of partial visual field regions constituting the entire visual field region of the spectacle lens, a simulation image obtained by performing image processing that reflects the lens visual characteristics of the spectacle lens on an original image, so that the prospective wearer can view it; and the terminal device includes an information output unit that outputs commentary information about the characteristics of the lens visual characteristics corresponding to the partial visual field region displayed on the display screen unit of the display device.
- In a tenth aspect of the present invention, at least one of the terminal device and the display device is connected to a communication network and communicates with a server device on the communication network.
- An eleventh aspect of the present invention is the invention according to the ninth aspect, wherein at least one of the terminal device and the display device includes an information storage unit that stores the simulation image and the commentary information for each partial visual field region.
- A twelfth aspect of the present invention is a product description assistance method for assisting the description of spectacle lens products, using a terminal device used at a spectacle store and a display device viewed by a prospective spectacle lens wearer who has visited the spectacle store.
- The method includes an image display step of selectively displaying on the display device, for each of a plurality of partial visual field regions constituting the entire visual field region of the spectacle lens, a simulation image obtained by performing image processing that reflects the lens visual characteristics of the spectacle lens on an original image, so that the prospective wearer views the simulation image; an information output step of acquiring from a storage unit, and outputting from the terminal device, commentary information corresponding to the partial visual field region displayed on the display device; and a selection switching step of switching the selection of the partial visual field region to be displayed on the display device and correspondingly switching the commentary information output by the terminal device.
- According to the present invention, the prospective spectacle lens wearer can sufficiently grasp the characteristics of the lens visual characteristics of the spectacle lens, and can also clearly see the entire visual field region of the spectacle lens.
- FIG. 1 is a schematic diagram showing a schematic configuration example of an entire simulation system in a first embodiment of the present invention.
- FIG. 2 is a block diagram showing a functional configuration example of a simulation system in the first embodiment of the present invention.
- FIG. 3 is a flowchart showing an outline of simulation processing in the first embodiment of the present invention.
- FIG. 4 is a flowchart showing details of a characteristic procedure of the simulation processing in the first embodiment of the present invention.
- FIG. 5 is a conceptual diagram showing a specific example of display output contents on the tablet terminal in the first embodiment of the present invention.
- FIG. 6 is an explanatory diagram showing a specific example of an original image, among the images handled in the simulation processing in the first embodiment of the present invention, that serves as the basis for generating a simulation image.
- FIG. 7 is an explanatory diagram showing a specific example of a simulation image obtained by performing image processing on an original image, among the images handled in the simulation processing in the first embodiment of the present invention.
- FIG. 8 is an explanatory diagram showing a specific example of an image representing contour lines of the clearness index of a spectacle lens, among the images handled in the simulation processing in the first embodiment of the present invention.
- FIG. 9 is an explanatory diagram showing a specific example of a simulation image for each partial visual field region on which the contour-line image of the clearness index is superimposed, among the images handled in the simulation processing in the first embodiment of the present invention.
- FIG. 1 is a schematic diagram illustrating a schematic configuration example of the entire simulation system according to the present embodiment.
- The simulation system allows the prospective spectacle lens wearer P1 who has visited the spectacle store S to experience, in a simulated manner, how things look through the spectacle lens that the wearer P1 plans to wear.
- The simulation system is configured such that a terminal device 1 used at the spectacle store S, a display device 2 having a display screen unit viewed by the prospective spectacle lens wearer P1 visiting the spectacle store S, and a server device 3 functioning as a computer are communicably connected via a communication network 4 such as the Internet.
- As the terminal device 1, for example, a portable information terminal (hereinafter referred to as a "tablet terminal") used by a store clerk P2 of the spectacle store S is used. In the following, the terminal device 1 is described as a tablet terminal.
- As the display device 2, for example, a head-mounted display (hereinafter referred to as an "HMD") attached to the head of the prospective spectacle lens wearer P1 is used. In the following, the display device 2 is described as an HMD.
- In this embodiment, the tablet terminal 1 and the HMD 2 are used, and together they constitute the simulation apparatus described later.
- FIG. 2 is a block diagram illustrating a functional configuration example of the simulation system according to the present embodiment.
- the simulation system in the present embodiment is roughly configured to include a server device 3 and a simulation device 5. Note that the server device 3 and the simulation device 5 are communicably connected via the communication network 4 as described above.
- The server device 3 generates a simulation image reflecting the lens visual characteristics of the spectacle lens, provides the generated simulation image to the simulation device 5 so that the prospective wearer P1 can experience the appearance through the spectacle lens, and performs other necessary processing.
- To this end, the server device 3 includes a communication interface (hereinafter abbreviated as "I/F") unit 31, an acquired information recognition unit 32, a lens design data generation unit 33, an original image storage unit 34, an image generation unit 35, a commentary information storage unit 36, and a control unit 37.
- the communication I / F unit 31 realizes a function for communicating with the simulation device 5 on the spectacle store S side via the communication line network 4.
- the acquired information recognition unit 32 realizes a function of recognizing information acquired from the spectacle store S through the communication I / F unit 31. It is assumed that the information from the spectacle store S side includes parameter information about spectacle lenses that the spectacle lens wearer P1 plans to wear.
- The parameter information is information about parameters derived from the prescription information of the spectacle lens that the prospective wearer P1 plans to wear, the shape information of the spectacle frame holding the spectacle lens, the living-environment information assumed for the prospective wearer P1, and the like.
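- As a concrete illustration only (not part of the patent), the parameter information described above could be bundled into a simple record such as the following sketch; all field names here are hypothetical, and the numeric values simply reuse the example prescription given later in this description.

```python
from dataclasses import dataclass

@dataclass
class LensParameterInfo:
    """Hypothetical container for the parameter information sent from the
    tablet terminal 1 to the server device 3 (field names are illustrative)."""
    # Prescription-derived parameters
    sphere: float          # spherical power S, in diopters
    cylinder: float        # astigmatic power C, in diopters
    axis: float            # astigmatism axis Ax, in degrees
    addition: float        # addition power Add, in diopters
    # Spectacle frame shape information
    frame_width_mm: float
    frame_height_mm: float
    # Assumed living-environment information
    lifestyle: str         # e.g. "desk work", "driving", "outdoor sports"

params = LensParameterInfo(sphere=2.00, cylinder=-1.00, axis=180,
                           addition=2.50, frame_width_mm=50.0,
                           frame_height_mm=30.0, lifestyle="desk work")
print(params)
```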
- Based on the parameter information recognized by the acquired information recognition unit 32, the lens design data generation unit 33 identifies the type of lens design standard to be applied to the spectacle lens that the prospective wearer P1 plans to wear, and generates lens design data of the spectacle lens while applying the identified type of lens design standard. Because there are many types of applicable lens design standards, the lens design data comes in hundreds of design types. The details of the lens design standards and of the lens design data generation are based on known technology (see, for example, International Publication No. 2009/133877), so their description is omitted here.
- the original image storage unit 34 realizes a function of storing and holding original image data 34 a necessary for generating a simulation image in the image generation unit 35.
- Examples of the original image data 34a stored and held by the original image storage unit 34 include image data of a three-dimensional CG (computer graphics) image corresponding to the original image of the simulation image.
- the original image is not necessarily a CG image, and may be an image captured by an imaging camera, for example.
- the original image storage unit 34 is assumed to store and hold original image data as original image data 34a for a plurality of partial visual field regions constituting the entire visual field region of the spectacle lens.
- the original image storage unit 34 may store and hold other image data.
- the “entire field of view” of the spectacle lens refers to a region corresponding to the entire field of view when viewed through the spectacle lens.
- “Full field of view” refers to a range of a viewing angle visible through a spectacle lens, for example, a range of about 90 ° in the horizontal direction and about 70 ° in the vertical direction.
- a plurality of “partial field areas” constituting the entire field area refers to respective areas when the entire field area is divided according to a preset division mode.
- the division into the partial visual field regions is performed in consideration of the difference in the characteristics of the lens visual characteristics of the spectacle lens.
- For example, in the case of a progressive power lens, the entire visual field region is divided into nine regions such that the right, central, and left portions of the far vision region, the right, central, and left portions of the near vision region, and the right, central, and left portions of the intermediate vision region each belong to a different region.
- Each of the plurality of partial visual field regions only needs to correspond to a part of the entire visual field region, and adjacent partial visual field regions may have overlapping image portions.
- Thus, the original image storage unit 34 does not output an image for the entire visual field region of the spectacle lens at once; rather, it can output an image for each partial visual field region (small visual field) obtained by dividing the entire visual field region into a plurality of small visual fields.
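- The division into partial visual field regions could, as one purely illustrative reading of the description above, be expressed as a 3 x 3 angular grid with a small shared margin between neighbouring regions; the overlap amount and the grid layout below are assumptions, not values from the patent.

```python
def partial_field_regions(h_deg=90.0, v_deg=70.0, rows=3, cols=3, overlap_deg=5.0):
    """Divide a full field of view of h_deg x v_deg into rows x cols partial
    visual field regions, each widened by overlap_deg so that neighbouring
    regions share an image portion. Returns (h_min, h_max, v_min, v_max)
    angular extents keyed by (row, col). Illustrative sketch only."""
    regions = {}
    cell_w, cell_h = h_deg / cols, v_deg / rows
    for r in range(rows):
        for c in range(cols):
            h_min = max(0.0, c * cell_w - overlap_deg)
            h_max = min(h_deg, (c + 1) * cell_w + overlap_deg)
            v_min = max(0.0, r * cell_h - overlap_deg)
            v_max = min(v_deg, (r + 1) * cell_h + overlap_deg)
            regions[(r, c)] = (h_min, h_max, v_min, v_max)
    return regions

for key, extent in partial_field_regions().items():
    print(key, extent)
```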
- the image generation unit 35 generates a simulation image reflecting the lens visual characteristics of the spectacle lens scheduled to be worn by the spectacle lens wearer P1.
- the image generation unit 35 has functions as an image processing unit 35a and an image superimposing unit 35b.
- Based on the lens design data generated by the lens design data generation unit 33, the image processing unit 35a performs image processing on the original image data 34a stored in the original image storage unit 34 so as to reflect the visual characteristics (blur, distortion, and the like) specified by the lens design data.
- In this way, a simulation image reflecting the lens visual characteristics of the spectacle lens that the prospective wearer P1 plans to wear is generated from the original image for each partial visual field region.
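- The following is a deliberately rough sketch of this per-region processing, using a uniform box blur as a stand-in for the blur and distortion that the real system would derive from the lens design data; the per-region blur radii are invented for illustration.

```python
import numpy as np

def simulate_region(original: np.ndarray, blur_radius: int) -> np.ndarray:
    """Very rough stand-in for the image processing of unit 35a: apply a
    separable box blur whose radius is assumed to come from the lens design
    data for this partial visual field region. A real implementation would
    derive per-pixel blur and distortion from the lens design."""
    img = original.astype(float)
    if blur_radius <= 0:
        return img
    kernel = np.ones(2 * blur_radius + 1) / (2 * blur_radius + 1)
    # Blur along rows, then along columns (separable box filter).
    blurred = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 0, blurred)
    return blurred

# Hypothetical per-region blur radii (larger toward the lens periphery).
region_blur = {(0, 0): 3, (0, 1): 1, (0, 2): 3,
               (1, 0): 2, (1, 1): 0, (1, 2): 2,
               (2, 0): 4, (2, 1): 1, (2, 2): 4}
original = np.random.rand(64, 64)  # placeholder original image for one region
sim = simulate_region(original, region_blur[(2, 0)])
```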
- Based on the lens design data generated by the lens design data generation unit 33, the image superimposing unit 35b obtains the clearness index of the spectacle lens that the prospective wearer P1 plans to wear, generates an image representing contour lines of the clearness index, and superimposes this contour image on the simulation image obtained by the image processing in the image processing unit 35a.
- The "clearness index" here is one of the indexes for evaluating the performance of a spectacle lens (particularly a progressive power lens). Since the details of the clearness index are based on known technology (see, for example, Japanese Patent No. 3919097), its description is omitted here.
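- As a sketch of the superimposition step only (the clearness-index computation itself is the known technology cited above and is not reproduced), one could quantize a clearness-index map into bands and darken the band boundaries on the simulation image so that they appear as contour lines; the placeholder arrays below are assumptions.

```python
import numpy as np

def contour_overlay(sim_img: np.ndarray, clearness: np.ndarray,
                    levels: int = 5) -> np.ndarray:
    """Sketch of the image superimposing unit 35b: quantize a clearness-index
    map into bands and darken the band boundaries on the simulation image so
    that they appear as contour lines."""
    bands = np.digitize(clearness, np.linspace(clearness.min(), clearness.max(), levels))
    # A pixel lies on a contour if its band differs from a neighbour's band.
    edge = np.zeros_like(bands, dtype=bool)
    edge[:-1, :] |= bands[:-1, :] != bands[1:, :]
    edge[:, :-1] |= bands[:, :-1] != bands[:, 1:]
    out = sim_img.copy()
    out[edge] = 0.0  # draw contour lines in black
    return out

sim = np.random.rand(64, 64)        # placeholder simulation image
clearness = np.random.rand(64, 64)  # placeholder clearness-index map
with_contours = contour_overlay(sim, clearness)
```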
- The commentary information storage unit 36 stores and holds commentary information 36a that explains the characteristics of the lens visual characteristics of the spectacle lens.
- Because the lens visual characteristics differ for each design type of the spectacle lens, the commentary information storage unit 36 stores the commentary information 36a for each type of lens design standard applied to the lens design data.
- Furthermore, the commentary information 36a about the characteristics of the lens visual characteristics in each partial visual field region is stored in association with that partial visual field region.
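- In other words, the commentary can be thought of as a table keyed by design type and partial visual field region; the sketch below is a hypothetical illustration of such a lookup (the design-type names, region identifiers, and commentary texts are invented).

```python
# Hypothetical commentary store keyed by (lens design type, partial region id),
# mirroring how the commentary information storage unit 36 is described.
COMMENTARY = {
    ("progressive_A", "near_center"):
        "The near vision region is widened at its center for reading.",
    ("progressive_A", "intermediate_right"):
        "Some blur remains at the right edge of the intermediate region.",
    # ... one entry per design type and per partial visual field region
}

def get_commentary(design_type: str, region_id: str) -> str:
    """Return the commentary text 36a for the currently displayed region."""
    return COMMENTARY.get((design_type, region_id),
                          "No commentary registered for this region.")

print(get_commentary("progressive_A", "near_center"))
```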
- the control unit 37 realizes a function of performing operation control of the entire server device 3. Accordingly, the operations of the above-described units 31 to 36 are controlled by the control unit 37.
- the functions of these units 31 to 37 are realized by the server device 3 executing a predetermined software program while using the hardware resources of the server device 3 as a computer.
- the software program is installed in the server device 3 and used.
- However, the software program is not necessarily limited to this; as long as the server device 3 can access it, the software program may reside in another device on the communication network 4.
- The simulation device 5 is used on the spectacle store S side so that the prospective wearer P1 can experience the appearance through the spectacle lens, and is specifically constituted by the HMD 2 and the tablet terminal 1.
- The HMD 2 is worn on the head of the prospective wearer P1 who has visited the spectacle store S and displays the simulation image, so that the wearer P1 can experience, in a simulated manner, how things look through the spectacle lens. To this end, the HMD 2 has functions as a communication I/F unit 21 and a display screen unit 22.
- the communication I / F unit 21 realizes a function for communicating with the tablet terminal 1 via a wireless or wired communication line (not shown). However, the communication I / F unit 21 may also have a function of performing communication with the server device 3 via the communication line network 4.
- the display screen unit 22 realizes a function for displaying a simulation image generated by the server device 3 and causing the spectacle lens wearer P1 to visually recognize the simulation image.
- The display screen unit 22 selectively displays the simulation image for each partial visual field region processed by the image processing unit 35a of the image generation unit 35, in a state in which the contour image of the clearness index of the spectacle lens generated by the image superimposing unit 35b is superimposed.
- Since the display screen unit 22 is a function of the HMD 2 and displays an image for each partial visual field region, its displayable image size does not necessarily have to correspond to the entire visual field region; for example, the display screen unit 22 may correspond to a viewing angle of only about 50° in the diagonal direction.
- The tablet terminal 1 is carried and operated by the store clerk P2 of the spectacle store S, and is used to input and output information necessary for letting the prospective wearer P1 experience, in a simulated manner, how things look through the spectacle lens.
- the tablet terminal 1 is configured to have functions as the communication I / F unit 11 and the touch panel unit 12.
- the communication I / F unit 11 implements a function for communicating with the server device 3 via the communication line network 4 and communicating with the HMD 2 via a wireless or wired communication line (not shown). It is.
- the touch panel unit 12 is used for inputting and outputting information, and more specifically, functions as an information output unit 12a, an operation unit 12b, and an information input unit 12c are realized.
- the information output unit 12a realizes a function of displaying and outputting various information to the store clerk P2 while using the information output function of the touch panel unit 12.
- The various information displayed by the information output unit 12a includes the commentary information 36a stored and held by the commentary information storage unit 36 of the server device 3. That is, the information output unit 12a has a function of acquiring the commentary information 36a from the commentary information storage unit 36 and displaying it to the store clerk P2.
- the information output unit 12a is configured to display and output the explanation information 36a corresponding to the partial visual field area displayed on the display screen unit 22 of the HMD2.
- the operation unit 12b realizes a function of performing a selection operation of a partial visual field area to be displayed on the display screen unit 22 of the HMD 2 using the information input function of the touch panel unit 12.
- the information input unit 12c realizes a function of inputting parameter information regarding the spectacle lens that the spectacle lens wearer P1 plans to wear using the information input function of the touch panel unit 12.
- FIG. 3 is a flowchart showing an outline of simulation processing in the present embodiment.
- First, parameter information about the spectacle lens that the prospective wearer P1 plans to wear is input on the tablet terminal 1 (step 101; "step" is hereinafter abbreviated as "S") and transmitted from the communication I/F unit 11 of the tablet terminal 1 to the server device 3 via the communication network 4.
- When the parameter information is transmitted, the server device 3 receives it at the communication I/F unit 31 and recognizes it with the acquired information recognition unit 32. Based on the recognition result, the lens design data generation unit 33 specifies the type of lens design standard to be applied and generates lens design data of the spectacle lens that the prospective wearer P1 plans to wear (that is, the spectacle lens according to the determined prescription) while applying the specified type of lens design standard (S102).
- When the lens design data generation unit 33 has generated the lens design data, the image generation unit 35 of the server device 3 generates a simulation image reflecting the lens visual characteristics specified by the lens design data (S103). The image data of the generated simulation image is then transmitted from the communication I/F unit 31 to the spectacle store S side via the communication network 4.
- On the spectacle store S side, the image data from the server device 3 is received by the communication I/F unit 11 of the tablet terminal 1, transferred under the management of the tablet terminal 1 from its communication I/F unit 11 to the HMD 2, and received by the communication I/F unit 21 of the HMD 2.
- The HMD 2 then displays the simulation image generated by the server device 3, under the management of the tablet terminal 1, so that the prospective wearer P1 can view it; in this way the wearer P1 has a simulated experience of the lens-wearing state (S104).
- If the prospective wearer P1 feels no discomfort with the appearance of the simulation image and judges the result of the simulated experience of the lens-wearing state to be OK (S105), the spectacle store S places a lens order for the wearer P1 based on the determined prescription (S106).
- If the prospective wearer P1 judges the result of the simulated experience to be NG (S105), the prescription of the spectacle lens is changed and the series of steps described above is repeated until the result of the simulated experience becomes OK (S101 to S105).
- In this way, the simulation processing for letting the prospective wearer P1 experience the lens-wearing state in a simulated manner is performed.
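- The loop of S101 to S106 can be summarized in rough code form; the sketch below is our own illustration rather than part of the patent, and all callback names are hypothetical stand-ins for the steps in FIG. 3.

```python
def simulation_session(input_parameters, generate_design_data, generate_images,
                       show_on_hmd, wearer_accepts, place_order, revise_prescription):
    """Hypothetical driver mirroring S101-S106 of FIG. 3: repeat the
    parameter-input / image-generation / viewing cycle until the prospective
    wearer accepts the simulated experience, then place the lens order."""
    while True:
        params = input_parameters()             # S101: input on the tablet terminal 1
        design = generate_design_data(params)   # S102: server specifies the design standard
        images = generate_images(design)        # S103: server generates simulation images
        show_on_hmd(images)                     # S104: HMD 2 shows them to the wearer P1
        if wearer_accepts():                    # S105: wearer judges the result
            return place_order(design)          # S106: order under the decided prescription
        revise_prescription()                   # otherwise change the prescription and repeat
```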
- FIG. 4 is a flowchart showing details of a characteristic procedure of the simulation processing in the present embodiment.
- In the simulation processing, the image generation unit 35 acquires from the lens design data generation unit 33 the lens design data generated by that unit, together with identification information about the type of lens design standard applied when the data was generated (S201). Furthermore, the image generation unit 35 acquires from the original image storage unit 34 the original image data 34a necessary for generating the simulation image (S202).
- After acquiring these data and information, the image processing unit 35a of the image generation unit 35 generates the simulation image. That is, the image processing unit 35a performs image processing that adds blur, distortion, and the like according to the acquired lens design data to the acquired original image data 34a, generating a simulation image that reflects the lens visual characteristics of the spectacle lens the prospective wearer P1 plans to wear (S203). As a result, the server device 3 becomes able to output simulation images not for the entire visual field region of the spectacle lens at once, but for each partial visual field region (that is, for each small visual field) obtained by dividing the entire visual field region into a plurality of small visual fields.
- Subsequently, the image superimposing unit 35b generates the contour image of the clearness index of the spectacle lens and superimposes it on the simulation image (S204).
- Thus, the server device 3 can output the simulation image for each partial visual field region with the contour image of the clearness index in that region superimposed.
- In addition, the control unit 37 reads out the commentary information 36a for each partial visual field region from the commentary information storage unit 36 (S205).
- The control unit 37 then transmits, from the communication I/F unit 31 to the tablet terminal 1 via the communication network 4, the image data of the simulation images for each partial visual field region generated by the image generation unit 35 with the contour image superimposed, together with the commentary information 36a read from the commentary information storage unit 36.
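- Pulling S201 to S205 together, the server-side preparation could look roughly like the self-contained sketch below; the blur and contour computations are crude stand-ins for the real image generation, and all names and values are illustrative assumptions.

```python
import numpy as np

def server_generate_outputs(design_blur, design_type, original_regions):
    """Rough sketch of S201-S205 on the server device 3: for every partial
    visual field region, produce a simulation image with a contour image
    superimposed and pair it with commentary text. `design_blur` maps a
    region id to a blur amount assumed to come from the lens design data."""
    outputs = {}
    for region_id, original in original_regions.items():          # S202: original images
        blur = design_blur.get(region_id, 0)                      # S201: per-region design info
        # S203 stand-in: average the image with a shifted copy to mimic blur.
        sim = original if blur == 0 else 0.5 * (original + np.roll(original, blur, axis=1))
        # S204 stand-in: mark boundaries between quantized brightness bands as contours.
        bands = np.round(sim * 4)
        contour = bands != np.roll(bands, 1, axis=0)
        sim_with_contour = np.where(contour, 0.0, sim)
        commentary = f"Region {region_id}: characteristics of design '{design_type}'."  # S205
        outputs[region_id] = (sim_with_contour, commentary)
    return outputs

regions = {rid: np.random.rand(32, 32) for rid in ["far_center", "near_center"]}
result = server_generate_outputs({"near_center": 2}, "progressive_A", regions)
```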
- In addition, the tablet terminal 1 requests the server device 3 to transmit image data of the original image for the entire visual field region, and the server device 3 transmits this image data in response. The transmission may be performed by separately sending the image data of the original images of all the partial visual field regions constituting the entire visual field region, or, if there are overlapping image portions between partial visual field regions, by combining the overlapping portions before transmission. Alternatively, if image data of an original image for the entire visual field region is prepared separately from the original images of the individual partial visual field regions, that image data may be transmitted as it is.
- The tablet terminal 1 receives the image data at the communication I/F unit 11, and the information output unit 12a of the touch panel unit 12 displays the transmitted original image for the entire visual field region in a predetermined portion of the display screen (S206).
- This image display may show the original images of the individual partial visual field regions arranged side by side as they are, or, when there are overlapping image portions between partial visual field regions, in a form in which the overlapping portions are combined.
- In this way, the store clerk P2 can grasp the whole of the original image underlying the simulation image to be viewed by the prospective wearer P1. Details of the display output mode at this time (including the position of the predetermined screen portion used) will be described later (see, for example, FIG. 5).
- the tablet terminal 1 determines whether or not any partial visual field area of the original image for the entire visual field area has been selected and designated by an operation performed by the clerk P2 using the operation unit 12b of the touch panel unit 12 ( S207). Specifically, for example, whether or not any partial visual field area constituting the entire visual field area is touch-operated by the store clerk P2 on the original image for the entire visual field area displayed and output by the information output unit 12a. Based on this, it is determined whether or not there is an area designation operation.
- If an area designation operation is performed on the operation unit 12b of the touch panel unit 12, the tablet terminal 1 sends the server device 3 a request to transmit the image data of the simulation image for the selected partial visual field region (that is, the small visual field) and the commentary information 36a describing that region, and receives them at the communication I/F unit 11 when they are transmitted in response to the request.
- Then, the information output unit 12a of the touch panel unit 12 displays, separately from the original image for the entire visual field region, the simulation image of the selected partial visual field region (that is, the small visual field) with the contour image superimposed (hereinafter referred to as the "simulation image for the small visual field"), enlarged relative to the display of the original image, in a predetermined portion of the display screen (S208).
- Furthermore, the information output unit 12a of the touch panel unit 12 displays, separately from the original image for the entire visual field region and the simulation image for the small visual field, the commentary information 36a describing the selected partial visual field region (small visual field) in another predetermined portion of the display screen (S209). Details of the display output mode at this time (including the positions of the predetermined screen portions used) will be described later (see, for example, FIG. 5).
- In addition, separately from the display output on the touch panel unit 12 described above, the tablet terminal 1 transmits the image data of the simulation image for the small visual field from the communication I/F unit 11 to the HMD 2.
- In the HMD 2, the communication I/F unit 21 receives the image data, and the display screen unit 22 displays the transmitted simulation image for the small visual field (S210). This image display is performed individually for each of the left and right eyes of the prospective wearer P1, so that a so-called 3D display can be presented. By viewing this display, the prospective wearer P1 has a simulated experience of the lens-wearing state.
- Note that the image displayed by the display screen unit 22 at this time is the simulation image for the small visual field, not an image for the entire visual field region.
- Therefore, even though the entire field of view through the spectacle lens spans, for example, about 90° horizontally and about 70° vertically, while the display screen unit 22 corresponds to a viewing angle of only about 50° in the diagonal direction, the display screen unit 22 can display the image without requiring any reduction of the simulation image.
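- A quick back-of-the-envelope check (our own flat-angle arithmetic, not from the patent) makes this plausible: one of nine partial regions of a 90° x 70° field has a diagonal of roughly 38°, which fits inside a ~50° diagonal display with margin to spare.

```python
import math

full_h, full_v = 90.0, 70.0          # full field through the lens (degrees)
rows = cols = 3                      # nine partial visual field regions
region_h, region_v = full_h / cols, full_v / rows
region_diag = math.hypot(region_h, region_v)
print(f"one region = {region_h:.0f} x {region_v:.1f} deg, diagonal = {region_diag:.0f} deg")
# -> one region = 30 x 23.3 deg, diagonal = 38 deg, comfortably inside a ~50 deg display,
#    leaving some margin even for the overlapping image portions between regions.
```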
- While the HMD 2 has the prospective wearer P1 view the simulation image for the small visual field to give a simulated experience of the lens-wearing state, the tablet terminal 1 displays to the store clerk P2 the commentary information 36a corresponding to the small visual field (partial visual field region) being output. Therefore, even if the clerk P2 has not completely memorized the lens visual characteristics of that small visual field, the clerk P2 can accurately recognize the characteristics by referring to the displayed commentary information 36a.
- As a result, the prospective wearer P1 can sufficiently grasp the characteristics of the lens visual characteristics. That is, by using the commentary information 36a displayed on the tablet terminal 1, the store clerk P2 can convey to the prospective wearer P1 correct information based on the commentary information 36a, rather than ambiguous information based on memory, about the lens visual characteristics of the spectacle lens selected by the wearer P1.
- Thereafter, the tablet terminal 1 determines whether another partial visual field region in the original image for the entire visual field region has been selected and designated, that is, whether the selection of the partial visual field region has been switched (S211). Specifically, for example, the presence or absence of a region switching operation is determined based on whether the store clerk P2 has touch-operated, on the original image for the entire visual field region displayed by the information output unit 12a, a partial visual field region different from the small visual field (partial visual field region) currently being displayed by the HMD 2.
- If there is a region switching operation, the tablet terminal 1 and the HMD 2 repeat the series of steps described above (S208 to S211) for the partial visual field region (that is, the small visual field) newly selected by the switching operation.
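- The tablet-terminal side of S206 to S211 can likewise be sketched as a small event loop; all callback names below are hypothetical stand-ins for the functions described above.

```python
def tablet_session(display_full_original, get_touched_region, fetch_region_outputs,
                   show_on_tablet, show_commentary, send_to_hmd):
    """Sketch of the tablet side of S206-S211: show the whole original image,
    then, whenever the clerk designates or switches a partial visual field
    region, fetch that region's simulation image and commentary, display both
    on the tablet, and forward the image to the HMD."""
    display_full_original()                              # S206
    current = None
    while True:
        touched = get_touched_region()                   # S207 / S211: region (re)designation
        if touched is None:                              # no more switching: leave the loop
            break
        if touched != current:
            sim_image, commentary = fetch_region_outputs(touched)
            show_on_tablet(sim_image)                    # S208: enlarged small-field image
            show_commentary(commentary)                  # S209: commentary 36a for the clerk
            send_to_hmd(sim_image)                       # S210: HMD shows it to wearer P1
            current = touched
```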
- On the other hand, if there is no switching operation, the prospective wearer P1 is asked to judge the result of the simulated experience of the lens-wearing state (see S104 in FIG. 3).
- Through the series of processing steps described above, the simulation system assists the store clerk P2 in explaining the spectacle lens products to the prospective wearer P1. That is, at the spectacle store S, the system sequentially performs an image display step of having the prospective wearer P1 selectively view the simulation image for each partial visual field region using the HMD 2, an information output step of outputting on the tablet terminal 1 the commentary information 36a corresponding to the partial visual field region being displayed on the HMD 2, and a selection switching step of switching the selection of the partial visual field region displayed on the HMD 2 and correspondingly switching the commentary information 36a output by the tablet terminal 1, thereby assisting the explanation given by the store clerk P2 of the spectacle store S.
- FIG. 5 is a conceptual diagram showing a specific example of display output contents on the tablet terminal according to the present embodiment.
- On the display screen of the touch panel unit 12, the original image 13a for the entire visual field region is displayed in a partial area at the upper left of the screen.
- This original image 13a is shown as an example in which the partial visual field regions obtained by the nine-way division are arranged in 3 rows by 3 columns so as to reproduce the state before the division.
- In a partial area below the display area of the original image 13a for the entire visual field region (that is, at the lower left of the screen), the partial visual field region selected and designated in the original image 13a is displayed in an enlarged state as the simulation image 13b for the small visual field.
- The simulation image 13b for the small visual field displayed here is the one selected and designated by the store clerk P2 and is also being displayed on the display screen unit 22 of the HMD 2. Therefore, by referring to the displayed simulation image 13b for the small visual field, the store clerk P2 can appropriately and easily grasp which partial visual field region the display screen unit 22 of the HMD 2 is currently displaying.
- Moreover, since the contour image of the clearness index is superimposed on the simulation image 13b for the small visual field, the characteristics of the visual characteristics of the lens can be grasped easily and clearly.
- a character image 13c about commentary information corresponding to the partial visual field area selected and specified in the original image 13a for the entire visual field area is displayed in a partial area on the right side of the screen.
- The character image 13c of the commentary information displayed here represents text describing the characteristics of the lens visual characteristics of the partial visual field region displayed as the simulation image 13b for the small visual field. Specifically, for example, text such as "the XX portion has XX characteristics" or "the XX portion has a wide range of application for XX" is displayed as the character image 13c of the commentary information. Therefore, by referring to the displayed character image 13c, the store clerk P2 can accurately recognize what lens visual characteristics the partial visual field region currently displayed on the display screen unit 22 of the HMD 2 has.
- the simulation image 13b for the small visual field and the character image 13c for the commentary information corresponding thereto are switched according to the touch operation using the original image 13a for the entire visual field region. For example, when one partial visual field area in the original image 13a for the entire visual field area is touched, a simulation image 13b for a small visual field corresponding to this and a character image 13c for commentary information are displayed and output. When the other partial visual field region is touch-operated, the display output contents are switched to the simulation image 13b corresponding to the small visual field and the character image 13c regarding the explanation information. That is, the simulation image 13b for the small visual field and the character image 13c for the explanation information are selectively displayed and output for each partial visual field region.
- Such display output contents differ depending on the design type of the spectacle lens. That is, if the type of lens design standard applied by the lens design data generation unit 33 differs, the simulation image generated by the image processing unit 35a based on the resulting lens design data also differs. Therefore, the display screen of the touch panel unit 12 shows an original image 13a for the entire visual field region, a corresponding simulation image 13b for the small visual field, and a character image 13c of the commentary information that differ according to the design type of the spectacle lens. This means that the display content of the touch panel unit 12 can be switched according to the design type of the spectacle lens.
- Note that the output layout described here is merely one specific example; the output layout on the display screen of the touch panel unit 12 is not particularly limited as long as it is set appropriately in advance.
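- As a small illustration of the region-designation touch operation described above, a touch coordinate inside the 3 x 3 original image 13a could be mapped to the index of the touched partial visual field region as follows; the assumption that the 3 x 3 image fills the given panel area is ours, since the actual screen layout is left open by the patent.

```python
def touched_region(x: float, y: float, panel_w: int, panel_h: int,
                   rows: int = 3, cols: int = 3):
    """Map a touch coordinate inside the full-field original image 13a to the
    (row, col) index of the touched partial visual field region."""
    col = min(cols - 1, int(x / panel_w * cols))
    row = min(rows - 1, int(y / panel_h * rows))
    return row, col

print(touched_region(410, 120, panel_w=600, panel_h=400))  # -> (0, 2): top-right region
```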
- FIG. 6 is an explanatory diagram illustrating a specific example of an original image that is a basis for generating a simulation image.
- The original images in the figure are obtained by dividing the entire visual field region of the spectacle lens into nine partial visual field regions (small visual fields). By this division, when the spectacle lens is a progressive power lens, the right, central, and left portions of the far vision region, of the near vision region, and of the intermediate vision region each belong to a different partial visual field region. Each of these partial visual field regions has an image portion that overlaps with the adjacent partial visual field regions.
- The original image for the central portion of the near vision region assumes a situation in which the prospective wearer P1 holds up and reads a sheet of paper with text written on it.
- FIG. 7 is an explanatory diagram illustrating a specific example of a simulation image obtained by performing image processing on an original image.
- The simulation images in the illustrated example are obtained by performing image processing on each of the original images for the partial visual field regions shown in FIG. 6.
- FIG. 8 is an explanatory diagram showing a specific example of an image representing contour lines of a clearness index of a spectacle lens.
- The contour images in the illustrated example correspond to the division of the original images for the partial visual field regions shown in FIG. 6, and use a different display brightness for each value of the clearness index of the spectacle lens.
- a boundary portion between the region portion having the same display brightness and the region portion having another display brightness adjacent thereto corresponds to the contour line of the clear index.
- The spectacle lens assumed here has, for example, a spherical power of S2.00, an astigmatic power of C-1.00, an astigmatism axis of Ax180, an addition power of Add2.50, a progressive zone length of 14 mm, and a specified interpupillary distance.
- FIG. 9 is an explanatory diagram illustrating a specific example of a simulation image for each partial visual field region on which the contour image of the clearness index is superimposed.
- The illustrated images are obtained by superimposing the contour images of FIG. 8 on the simulation images of FIG. 7. The brightness of each image is also adjusted according to the value of the clearness index.
- a simulation image on which such a contour image is superimposed is selectively displayed and output for each partial visual field region.
- As described above, in the present embodiment, when the prospective wearer P1 is given a simulated experience of the lens-wearing state, the commentary information 36a corresponding to the small visual field (partial visual field region) displayed on the display screen unit 22 of the HMD 2 is displayed to the store clerk P2 by the information output unit 12a of the tablet terminal 1. That is, the commentary information 36a about the characteristics of the visual characteristics of the lens is prepared in advance for each design type and for each partial visual field region, and the commentary information 36a corresponding to the displayed content is output in accordance with the display of the simulation image to the prospective wearer P1.
- As a result, the prospective wearer P1 can sufficiently grasp the characteristics of the lens visual characteristics. Therefore, according to the present embodiment, even when, for example, the lens is a progressive power lens with an individually designed free-form surface and the visual characteristics of the spectacle lens come in several hundred variations depending on the design type, the prospective wearer P1 can fully understand the differences in the visual characteristics of the lens regardless of the skill of the store clerk P2, and can judge whether the result of the simulated experience is acceptable, so that satisfaction can be given to the prospective wearer P1.
- Also, in the present embodiment, the display screen unit 22 of the HMD 2 selectively displays the simulation image for each small visual field (partial visual field region) for the prospective wearer P1 to view.
- In other words, the simulation image is not displayed for the entire visual field region of the spectacle lens at once but selectively for each of the partial visual field regions constituting it. Therefore, even though the entire visual field region spans, for example, about 90° horizontally and about 70° vertically, and the display screen unit 22 corresponds to a viewing angle of only about 50° diagonally, the image can be displayed without having to reduce the simulation image.
- Moreover, by switching the selected region, the entire visual field of the spectacle lens can still be shown clearly to the prospective wearer P1. Therefore, according to the present embodiment, it is not necessary for the display screen unit 22 to reproduce the entire visual field region of the spectacle lens at once; the prospective wearer P1 can be given a simulated experience of the lens-wearing state using a small, light, and inexpensive HMD 2, and even then can be shown the entire visual field region clearly by switching the display region. This makes it possible to eliminate the dissatisfaction that the spectacle store S side might otherwise feel about the simulation apparatus 5.
- Furthermore, in the present embodiment, the simulation system assists the store clerk P2 in explaining the spectacle lens products to the prospective wearer P1. That is, by using the commentary information 36a displayed on the tablet terminal 1, the store clerk P2 can convey to the prospective wearer P1 correct information based on the commentary information 36a, rather than ambiguous information based on memory, about the lens visual characteristics of the spectacle lens the wearer P1 has selected. Therefore, according to the present embodiment, the store clerk P2 is not required to have a high level of skill, and the prospective wearer P1 can sufficiently grasp the characteristics of the lens visual characteristics of the spectacle lens.
- In this way, the dissatisfaction that the prospective wearer P1 or the spectacle store S side might feel is resolved, and satisfaction can be given to each of them.
- In the present embodiment, when the simulation image is displayed and output, an image on which contour lines of the clearness index of the spectacle lens are superimposed is displayed and output. Therefore, compared with the case where no contour image is superimposed, it is easier to grasp the characteristics of the lens visual characteristics.
- This is particularly effective when the resolution of the display screen unit 22 of the HMD 2 is not sufficient, because in that case the blur and distortion reflected in the simulation image may not be fully reproduced, whereas a superimposed contour image can supplement the portions that cannot be reproduced.
- That is, the prospective spectacle lens wearer P1 can have a simulated experience of the lens wearing state using the small, light, and inexpensive HMD 2. Furthermore, if the contour image is superimposed, even subtle differences in lens visual characteristics between spectacle lenses can be recognized through the differences in the superimposed contour images.
- In the present embodiment, the simulation image is shown to the prospective spectacle lens wearer P1 using the HMD 2 worn on the head of the prospective spectacle lens wearer P1, and the display screen unit 22 of the HMD 2 displays images separately for the left and right eyes of the prospective spectacle lens wearer P1. Therefore, according to the present embodiment, it is possible to give the prospective spectacle lens wearer P1 a simulated experience while appropriately handling characteristics unique to spectacle lenses, such as different prescriptions for the left and right eyes. In addition, since a so-called 3D display of the simulation image can easily be performed, a realistic image can be presented to the prospective spectacle lens wearer P1.
- In the present embodiment, the display output of the commentary information 36a is performed on the tablet terminal 1 used by the clerk P2. That is, the tablet terminal 1 is provided with the information output unit 12a that displays and outputs the commentary information 36a. Therefore, according to the present embodiment, the clerk P2 can explain the spectacle lens product to the prospective spectacle lens wearer P1 while holding the highly portable tablet terminal 1, which is very convenient for the clerk P2. Further, since the display output of the commentary information 36a uses the information output function that a tablet terminal generally provides, the tablet terminal 1 can be configured from a general-purpose product, which also contributes to reducing the cost of the simulation apparatus 5.
- In the present embodiment, the clerk P2 performs, on the tablet terminal 1, a selection operation (including both a region designation operation and a region switching operation) for the small visual field (partial visual field region) to be displayed on the HMD 2. That is, the tablet terminal 1 is provided with the operation unit 12b through which the clerk P2 performs the selection operation. Therefore, according to the present embodiment, the clerk P2 using the tablet terminal 1 can easily and accurately grasp which small visual field (partial visual field region) the HMD 2 is currently displaying, which makes it easy for the clerk P2 to explain the spectacle lens product to the prospective spectacle lens wearer P1. A minimal sketch of such a selection-switching step appears below.
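A selection-switching step of this kind might, as a minimal sketch, look as follows; the class and method names, and the injected `hmd` and `tablet_display` objects, are illustrative assumptions rather than parts of the embodiment.

```python
from dataclasses import dataclass

@dataclass
class RegionContent:
    simulation_image: bytes   # image data for one partial visual field region
    commentary: str           # commentary information 36a for that region

class SelectionSwitcher:
    """Ties a region selection on the tablet to the HMD image and the
    commentary shown to the clerk (hypothetical helper, not from the patent)."""
    def __init__(self, contents: dict[str, RegionContent], hmd, tablet_display):
        self._contents = contents        # keyed by region name, e.g. "near-center"
        self._hmd = hmd                  # assumed to expose show(image_bytes)
        self._tablet = tablet_display    # assumed to expose show_text(text)

    def select(self, region_name: str) -> None:
        """Region designation / switching operation from the operation unit."""
        content = self._contents[region_name]
        self._hmd.show(content.simulation_image)     # image display step
        self._tablet.show_text(content.commentary)   # information output step
```

In use, the clerk's tap on a region button would simply call `select("near-center")` or similar, so the HMD image and the displayed commentary always change together.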
- In the present embodiment, the tablet terminal 1 used on the spectacle store S side communicates with the server device 3 via the communication line network 4 and acquires the simulation images and the commentary information 36a from the server device 3. That is, the tablet terminal 1 is provided with the communication I/F unit 11 for acquiring the simulation images and the commentary information 36a, while the server device 3 is provided with the image generation unit 35 for generating the simulation images and the commentary information storage unit 36 for storing the commentary information 36a in advance. Therefore, according to the present embodiment, the generation of the simulation images, which involves a large processing load, and the storage of the commentary information 36a, which requires a large storage capacity, are concentrated on the server device 3 side, which has high processing capability.
- Since the tablet terminal 1 and the HMD 2 constituting the simulation device 5 used on the spectacle store S side therefore do not need high processing capability for simulation image generation and the like, this also contributes to reducing the cost of building the simulation system. This is particularly effective when a plurality of tablet terminals 1 and HMDs 2 (that is, a plurality of spectacle stores S) are connected to the server device 3.
- In the present embodiment, the tablet terminal 1 includes the information input unit 12c for inputting parameter information on the spectacle lens, and the server device 3 includes the lens design data generation unit 33, which identifies the type of lens design standard to be applied based on the parameter information and generates the lens design data of the spectacle lens. Therefore, according to the present embodiment, a series of processes from the generation of the lens design data to the generation of the simulation image can be performed on the server device 3 side, which improves the efficiency of processing in the server device 3.
- From the viewpoint of the spectacle store S, once the parameter information has been input, the simulation image and related data are sent from the server device 3, which is highly convenient for both the prospective spectacle lens wearer P1 and the clerk P2 of the spectacle store S.
- FIG. 10 is a flowchart showing an outline of simulation processing in the second embodiment.
- In the second embodiment, when parameter information including the prescription information of the spectacle lens that the prospective spectacle lens wearer P1 plans to wear and the shape information of the spectacle frame holding the spectacle lens is input through the information input unit 12c of the tablet terminal 1 (S301), the parameter information is transmitted to the server device 3 via the communication line network 4.
- In the server device 3, the acquisition information recognition unit 32 then acquires frame shape data specifying the frame shape of the spectacle frame based on the spectacle frame shape information included in the parameter information (S302).
- The lens design data generation unit 33 then generates, based on the recognition result of the prescription information and the like in the acquisition information recognition unit 32, the lens design data of the spectacle lens that the prospective spectacle lens wearer P1 plans to wear (that is, the spectacle lens corresponding to the determined prescription) (S303).
- When the lens design data generation unit 33 has generated the lens design data, the image generation unit 35 in the server device 3 generates a simulation image reflecting the lens visual characteristics specified by the lens design data (S304). Further, the image generation unit 35 generates a frame image of the spectacle frame based on the frame shape data acquired by the acquisition information recognition unit 32 and reflects it in the simulation image (S305); specifically, the frame image of the spectacle frame is superimposed on the simulation image. The server device 3 then transmits the image data of the simulation image reflecting the frame image from the communication I/F unit 31 to the spectacle store S side via the communication line network 4. The subsequent processing (S306 to S308) is the same as in the first embodiment (see FIG. 3). A minimal sketch of this request/response exchange follows.
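The exchange in S301 to S305 can be pictured with the following minimal sketch, in which the network layer is stubbed out and all payload field names are illustrative assumptions rather than the embodiment's actual format; the heavy lens-design and image-generation steps are represented only as placeholders.

```python
import json

def build_parameter_info(prescription: dict, frame_shape: dict) -> str:
    """Tablet side (S301): bundle prescription and frame-shape information."""
    return json.dumps({"prescription": prescription, "frame_shape": frame_shape})

def server_handle_request(payload: str) -> dict:
    """Server side (S302-S305), with the heavy steps represented as stubs."""
    params = json.loads(payload)
    frame_shape_data = params["frame_shape"]                 # S302
    lens_design_data = {"design_type": "progressive",        # S303 (stub)
                        "prescription": params["prescription"]}
    regions = ["distance-center", "intermediate-center", "near-center"]  # etc.
    return {
        region: {
            "simulation_image": f"<image for {region}>",     # S304 (stub)
            "frame_overlay": frame_shape_data,               # S305 (stub)
        }
        for region in regions
    }

response = server_handle_request(build_parameter_info(
    {"sphere": 2.00, "cylinder": -1.00, "axis": 180, "addition": 2.50},
    {"shape": "oval", "width_mm": 52, "height_mm": 36},
))
print(response["near-center"]["simulation_image"])
```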
- FIG. 11 is a flowchart showing details of a characteristic procedure of simulation processing in the second embodiment.
- When generating the simulation image (see S304 and S305 in FIG. 10), the image generation unit 35 of the server device 3 acquires, from the lens design data generation unit 33, the lens design data generated by the lens design data generation unit 33 together with identification information on the type of lens design standard applied when generating that data (S401). Further, the image generation unit 35 acquires the original image data 34a necessary for generating the simulation image from the original image storage unit 34 (S402), and acquires frame shape data for the spectacle frame holding the spectacle lens from the acquisition information recognition unit 32 (S403).
- After these various data and pieces of information have been acquired, the image processing unit 35a in the image generation unit 35 generates the simulation image. That is, the image processing unit 35a performs image processing that adds blur, distortion, and the like according to the acquired lens design data to the acquired original image data 34a, thereby generating a simulation image reflecting the lens visual characteristics of the spectacle lens that the prospective spectacle lens wearer P1 plans to wear (S404). As a result, the server device 3 becomes able to output simulation images not for the entire visual field region of the spectacle lens at once, but for each partial visual field region (that is, for each small visual field) obtained by dividing the entire visual field region into a plurality of small visual fields. A minimal sketch of this per-region image processing step appears below.
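The spatially varying blur of S404 could be approximated, for illustration only, along the following lines. The per-pixel blur-radius map is a placeholder standing in for values that would in practice be derived from the lens design data (for example by ray tracing), and the simple blending scheme is a crude stand-in for the known simulation techniques the embodiment relies on.

```python
import numpy as np
from PIL import Image, ImageFilter

def simulate_region(original: Image.Image, blur_radius_map: np.ndarray) -> Image.Image:
    """Crude spatially varying blur for one partial visual field region:
    blend the sharp image with a fully blurred copy, weighted per pixel by
    the normalized local blur radius (blur_radius_map has shape (H, W))."""
    sharp = np.asarray(original.convert("RGB"), dtype=float)
    max_radius = float(blur_radius_map.max()) or 1.0
    blurred = np.asarray(
        original.convert("RGB").filter(ImageFilter.GaussianBlur(max_radius)),
        dtype=float)
    weight = (blur_radius_map / max_radius)[..., None]   # 0 = clear, 1 = most blurred
    out = (1.0 - weight) * sharp + weight * blurred
    return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8))

# Example usage (file name and radius map are placeholders):
# region = Image.open("near_center_original.png")
# radii = np.zeros((region.height, region.width)); radii[:, -100:] = 4.0
# simulate_region(region, radii).save("near_center_simulated.png")
```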
- In addition, in the image generation unit 35, the image superimposing unit 35b generates a contour image of the clearness index of the spectacle lens and superimposes the contour image on the simulation image (S405).
- As a result, the server device 3 can output the simulation image for each partial visual field region with the contour image of the clearness index for that partial visual field region superimposed. A minimal sketch of such a contour overlay follows.
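One possible way to render such an overlay (an assumed approach, not the embodiment's) is to draw contour lines of a clearness-index map over the simulation image; here the clearness index is assumed to be available as a 2-D array on the same grid as the image.

```python
import numpy as np
import matplotlib.pyplot as plt

def overlay_clearness_contours(simulation_image: np.ndarray,
                               clearness_index: np.ndarray,
                               out_path: str = "region_with_contours.png") -> None:
    """Draw labelled contour lines of the clearness index over one region image."""
    fig, ax = plt.subplots(figsize=(4, 4))
    ax.imshow(simulation_image)                       # the simulation image itself
    levels = np.linspace(clearness_index.min(), clearness_index.max(), 8)
    cs = ax.contour(clearness_index, levels=levels, colors="white",
                    linewidths=0.8, origin="upper")   # align with imshow's top-left origin
    ax.clabel(cs, inline=True, fontsize=6, fmt="%.1f")
    ax.set_axis_off()
    fig.savefig(out_path, dpi=150, bbox_inches="tight")
    plt.close(fig)
```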
- Further, the image generation unit 35 generates a frame image of the spectacle frame based on the frame shape data acquired by the acquisition information recognition unit 32, and then performs the process of reflecting the generated frame image in the simulation image (S406). The reflection process can be performed, for example, as described below.
- That is, the image generation unit 35 specifies the shape of the frame of the spectacle frame based on the frame shape data. Further, using the corneal vertex distance (wearing distance) defined for the spectacle lens that the prospective spectacle lens wearer P1 plans to wear, the image generation unit 35 identifies at which positions on the simulation image the frame of the spectacle frame will appear.
- Then, using these identification results, the image generation unit 35 generates the frame image of the spectacle frame as it will be seen by the prospective spectacle lens wearer P1, and reflects it in the simulation image by superimposing the frame image on the simulation image for each partial visual field region.
- As a result, the server device 3 becomes able to output the simulation image for each partial visual field region with the frame image of the spectacle frame as it would appear in that partial visual field region reflected (that is, superimposed). A minimal sketch of how such a frame outline might be located follows.
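The geometric step of locating the frame rim can be illustrated with the following simplified sketch, which assumes a pinhole-style mapping from a rim point on the spectacle plane, via the corneal vertex distance, to a pixel position inside one region's angular window; all numeric values and the mapping itself are illustrative assumptions, not the embodiment's method.

```python
import math

def rim_point_to_pixel(x_mm: float, y_mm: float,
                       vertex_distance_mm: float,
                       region_center_deg: tuple[float, float],
                       region_span_deg: tuple[float, float],
                       image_size_px: tuple[int, int]):
    """Return (col, row) pixel coordinates of a frame-rim point, or None if it
    falls outside this partial visual field region."""
    h_angle = math.degrees(math.atan2(x_mm, vertex_distance_mm))
    v_angle = math.degrees(math.atan2(y_mm, vertex_distance_mm))
    ch, cv = region_center_deg
    sh, sv = region_span_deg
    u = (h_angle - ch) / sh + 0.5      # 0..1 across the region horizontally
    v = (cv - v_angle) / sv + 0.5      # 0..1 downwards
    if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
        return None
    w, h = image_size_px
    return int(u * (w - 1)), int(v * (h - 1))

# Example: a rim point 5 mm out and 2 mm up at a 12 mm corneal vertex distance,
# checked against a region centred at (22 deg, 10 deg) spanning 30 x 23 deg.
print(rim_point_to_pixel(5.0, 2.0, 12.0, (22.0, 10.0), (30.0, 23.0), (640, 480)))
```

Tracing the whole rim polygon through such a mapping gives, per region, the pixels where the frame image 50 should be drawn.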
- Thereafter, in the server device 3, the control unit 37 reads out the commentary information 36a for each partial visual field region from the commentary information storage unit 36 (S407). Then, as necessary (for example, in response to a request from the tablet terminal 1), the control unit 37 transmits, from the communication I/F unit 31 to the tablet terminal 1 via the communication line network 4, the image data of the simulation images for each partial visual field region generated by the image generation unit 35 with the contour images superimposed and the frame images reflected, together with the commentary information 36a read from the commentary information storage unit 36. A minimal sketch of how such commentary might be keyed is shown below.
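The commentary information storage unit 36 might, for illustration, be organised as a lookup keyed by lens design standard type and partial visual field region; the entries below are placeholders, not text from the embodiment.

```python
# Hypothetical commentary store keyed by (design standard type, region).
COMMENTARY_36A = {
    ("individual-progressive", "near-center"):
        "Wide, stable near vision zone; little swim when reading.",
    ("individual-progressive", "intermediate-right"):
        "Some peripheral astigmatism; objects may appear slightly distorted.",
    ("standard-progressive", "near-center"):
        "Narrower near zone than the individually designed type.",
}

def read_commentary(design_type: str, region: str) -> str:
    """Control-unit-style lookup for the currently displayed region."""
    return COMMENTARY_36A.get((design_type, region),
                              "No commentary registered for this region.")

print(read_commentary("individual-progressive", "near-center"))
```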
- The subsequent processing (S408 to S413) is the same as in the first embodiment (see FIG. 4).
- FIG. 12 is an explanatory diagram illustrating a specific example in which a frame image is reflected in a simulation image obtained by performing image processing on an original image.
- The simulation image in the illustrated example is obtained by performing image processing on each of the original images for the partial visual field regions shown in FIG. 6. Further, in the illustrated example, the frame image 50 corresponding to the frame of the spectacle frame is reflected in the simulation image for each partial visual field region; that is, the frame image 50 is superimposed on the simulation image. With such display output, the prospective spectacle lens wearer P1 who views it can easily and accurately grasp the lens visual characteristics of the spectacle lens in the state of wearing the spectacle frame (that is, with the frame rim entering the field of view), namely the visual characteristics inside the frame rim.
- FIG. 13 is an explanatory diagram showing a specific example in which a frame image is reflected in an image representing contour lines of the clearness index of the spectacle lens.
- The contour image in the illustrated example corresponds to the division of the original image into the partial visual field regions shown in FIG. 6, and its display brightness differs according to the clearness index of the spectacle lens. The boundary between a region portion of one display brightness and an adjacent region portion of another display brightness corresponds to a contour line of the clearness index. As in the case of FIG. 12, the spectacle lens is assumed to be, for example, a progressive power lens for the right eye with a spherical power of S2.00, an astigmatic power of C-1.00, an astigmatic axis of Ax180, an addition power of Add2.50, a progressive zone length of 14 mm, and an interpupillary distance of PD = 32 + 32 mm.
- FIG. 14 is an explanatory diagram showing a specific example in which a frame image is reflected in a simulation image for each partial visual field region on which a contour image of the clearness index is superimposed.
- The image in the illustrated example is obtained by superimposing the contour image of FIG. 13 on the simulation image of FIG. 12, and further reflects the frame image 50 corresponding to the frame of the spectacle frame. The brightness of the image is also adjusted according to the value of the clearness index. On the display screen unit 22 of the HMD 2, a simulation image with such a contour image superimposed and the frame image 50 reflected is selectively displayed and output for each partial visual field region. The prospective spectacle lens wearer P1 who views this display output can therefore easily and accurately grasp the lens visual characteristics of the spectacle lens in the state of wearing the spectacle frame, as well as the distribution of the clearness index of the spectacle lens with respect to the position of the frame of the spectacle frame.
- In the second embodiment, the image generation unit 35 of the server device 3 performs the process of reflecting the frame image 50 of the spectacle frame in the simulation image, and the simulation image reflecting the frame image 50 is displayed on the display screen unit 22 of the HMD 2. Therefore, the prospective spectacle lens wearer P1 can grasp the lens visual characteristics of the spectacle lens while the state of wearing the spectacle frame is reproduced, and the simulated experience of the lens wearing state can be performed more appropriately than when the frame image 50 is not reflected, which is very convenient. This is particularly effective when the simulation image is selectively displayed for each small visual field (partial visual field region), because the frame portion of the spectacle frame enters the field of view in the partial visual field regions located toward the periphery of the entire visual field region; by reflecting the frame image 50 even when displaying the simulation image of such a partial visual field region, that state can be appropriately reproduced and the simulated experience of the lens wearing state can be optimized.
- In each of the embodiments described above, the display device that displays the simulation image is the HMD 2, but any display device other than the HMD 2 (for example, a stationary display device) may be used to display the simulation image, as long as the simulation image can be viewed by the prospective spectacle lens wearer P1.
- Likewise, in each embodiment the terminal device that outputs the commentary information 36a is the tablet terminal 1, but the commentary information 36a may be output using a terminal device other than the tablet terminal 1 (for example, a notebook or desktop personal computer), as long as the terminal device can perform information input and output with the clerk P2.
- In each embodiment, the commentary information 36a, which is character information, is displayed and output to the clerk P2 on the tablet terminal 1, and the clerk P2 reads out the display output of the commentary information 36a. However, the commentary information 36a may be output in any manner as long as the clerk P2 or the prospective spectacle lens wearer P1 can grasp the information. For example, if the terminal device that outputs the commentary information 36a has a sound output function, the commentary information 36a may be output as sound so that the clerk P2 or the prospective spectacle lens wearer P1 can grasp it.
- In each embodiment, the tablet terminal 1 receives the image data of the simulation images from the server device 3 and sends the image data to the HMD 2 to have them displayed on the HMD 2, but it is also possible for the HMD 2 to exchange image data directly with the server device 3. That is, it is only necessary that at least one of the tablet terminal 1 and the HMD 2 constituting the simulation device 5 be configured to communicate with the server device 3 on the communication line network 4.
- In each embodiment, the server device 3 on the communication line network 4 is provided with the image generation unit 35 that generates the simulation images and the commentary information storage unit 36 that stores the commentary information 36a in advance, but these may instead be held in advance by the simulation device 5 used on the spectacle store S side. That is, it is also conceivable that at least one of the tablet terminal 1 and the HMD 2 constituting the simulation device 5 includes an information storage unit that stores the simulation image and the commentary information 36a for each partial visual field region, and that the simulation image and the commentary information 36a are read out from the information storage unit and output as necessary. A minimal sketch of such a local store appears below.
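Such a local information storage unit might, purely as an illustration, be backed by a directory of pre-generated region images plus a JSON index of commentary text; the file layout and names below are assumptions, not part of the embodiment.

```python
import json
from pathlib import Path

class LocalInformationStore:
    """Offline stand-in for the server-side image generation unit 35 and
    commentary information storage unit 36 (hypothetical layout)."""
    def __init__(self, root: Path):
        self._root = root
        self._commentary = json.loads((root / "commentary_36a.json").read_text())

    def simulation_image_path(self, region: str) -> Path:
        return self._root / "regions" / f"{region}.png"

    def commentary(self, region: str) -> str:
        return self._commentary.get(region, "")

# Example usage with placeholder paths:
# store = LocalInformationStore(Path("/data/lens_simulations/design_A"))
# image_path = store.simulation_image_path("near-center")
# text = store.commentary("near-center")
```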
- In each embodiment, the case where the server device 3 includes the lens design data generation unit 33 and where the processing from the generation of the lens design data to the generation of the simulation image is performed as a series of processes has been described as an example, but it is also possible to generate the lens design data and the simulation image with entirely separate devices.
- In each embodiment, the case where the entire visual field region of the spectacle lens is divided into nine partial visual field regions (small visual fields) when generating and displaying the simulation images has been described as an example, but there are no particular restrictions on the number or manner of division as long as it is predetermined. Furthermore, in each embodiment, the case where the division into partial visual field regions (small visual fields) is already made at the stage of the original image data 34a stored in the original image storage unit 34 has been described as an example, but it suffices that the division into partial visual field regions (small visual fields) is made by the stage of simulation image generation by the image processing unit 35a at the latest.
- Therefore, if, for example, the original image storage unit 34 stores the original image data 34a of an original image covering the entire visual field region, it is also conceivable to extract the original image of a partial visual field region from that entire visual field region and then generate the simulation image for that partial visual field region. A minimal sketch of such an extraction follows.
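Such an extraction could look like the following sketch, which assumes a simple linear mapping between viewing angle and pixel position in the full-field original image; the division used in the example comment is illustrative, not the embodiment's.

```python
from PIL import Image

FULL_H_DEG, FULL_V_DEG = 90.0, 70.0    # angular extent of the full-field image

def crop_partial_region(full_image: Image.Image,
                        h_range_deg: tuple[float, float],
                        v_range_deg: tuple[float, float]) -> Image.Image:
    """Crop the pixels corresponding to a horizontal/vertical angle range,
    with (0, 0) taken as the top-left corner of the full field."""
    w, h = full_image.size
    left   = int(h_range_deg[0] / FULL_H_DEG * w)
    right  = int(h_range_deg[1] / FULL_H_DEG * w)
    top    = int(v_range_deg[0] / FULL_V_DEG * h)
    bottom = int(v_range_deg[1] / FULL_V_DEG * h)
    return full_image.crop((left, top, right, bottom))

# Example: a near-vision central region taken as the middle third horizontally
# and the bottom third vertically (illustrative division):
# region = crop_partial_region(Image.open("full_field.png"), (30, 60), (46.7, 70))
```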
- In each embodiment, the original image 13a for the entire visual field region, the enlarged simulation image 13b for the small visual field, and the character image 13c for the commentary information are displayed on the display screen of the tablet terminal 1, but the output layout on the display screen is not particularly limited, and it is conceivable that the individual display contents are changed as appropriate.
- In each embodiment, the case where the simulation image 13b for a small visual field is output with a contour image of the clearness index of the spectacle lens superimposed has been described as an example, but the contour image does not necessarily have to be superimposed; the contour lines are not essential.
- In each embodiment, the case where the image processing unit 35a generates the simulation image by performing image processing that reflects the lens visual characteristics (such as blur and distortion) on the original image data 34a has been described as an example, but the image processing may additionally include processing that reproduces effects such as polarization and light control. If such image processing is performed as well, a simulation image close to a natural image can be generated even when the original image data 34a is a CG image. A minimal sketch of one such additional effect follows.
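As one illustrative example of such an additional effect (not the embodiment's method), a light-control (dimming) state could be approximated by scaling the brightness of the simulation image by an assumed lens transmittance.

```python
from PIL import Image, ImageEnhance

def apply_light_control(simulation_image: Image.Image,
                        transmittance: float = 0.4) -> Image.Image:
    """Scale brightness by the assumed lens transmittance (1.0 = fully clear)."""
    return ImageEnhance.Brightness(simulation_image).enhance(transmittance)

# Example usage with a placeholder file name:
# dimmed = apply_light_control(Image.open("near_center_region.png"), 0.35)
```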
- DESCRIPTION OF SYMBOLS: 1 ... tablet terminal (terminal device), 2 ... HMD (display device), 11 ... communication I/F unit, 12 ... touch panel unit, 12a ... information output unit, 12b ... operation unit, 12c ... information input unit, 13a ... original image for the entire visual field region, 13b ... simulation image for a small visual field, 13c ... image for commentary information, 21 ... communication I/F unit, 22 ... display screen unit, 3 ... server device, 4 ... communication line network, 5 ... simulation apparatus, 31 ... communication I/F unit, 32 ... acquisition information recognition unit, 33 ... lens design data generation unit, 34 ... original image storage unit, 34a ... original image data, 35 ... image generation unit, 35a ... image processing unit, 35b ... image superimposing unit, 36 ... commentary information storage unit, 36a ... commentary information, 37 ... control unit, 50 ... frame image
Abstract
Description
To achieve this object, the present inventor first examined the characteristics of the lens visual characteristics of spectacle lenses. In the case of, for example, a progressive power lens having an individually designed free-form surface, the characteristics of the lens visual characteristics differ for each design type of the spectacle lens, and moreover differ between regions such as the distance vision region, the near vision region, and the intermediate vision region. It is therefore considered extremely difficult for anyone to grasp all of the differences, which by combination amount to several hundred kinds.
With this in mind, the present inventor carried out further intensive studies. The inventor arrived at the idea that, for lens visual characteristics that differ between lens design types and between regions, commentary information on those characteristics can be prepared in advance for each design type and for each region, and that if the commentary information corresponding to the displayed content is output in step with the display of the simulation image for the prospective spectacle lens wearer, the characteristics of the lens visual characteristics can be grasped accurately and easily. Furthermore, the inventor arrived at the idea that if the simulation image is displayed not collectively for the entire visual field region of the spectacle lens but selectively for each of a plurality of partial visual field regions into which the entire visual field region is divided, the corresponding commentary information can be output simply, and the entire visual field region of the spectacle lens can be viewed clearly regardless of the size, resolution, and other properties of the field of view on the image display side.
The present invention has been made on the basis of these new insights by the present inventor.
A second aspect of the present invention is characterized in that, in the invention according to the first aspect, the image generation unit of the server device performs a process of superimposing contour lines of the clearness index of the spectacle lens on the simulation image, and the display screen unit of the display device displays the simulation image on which the contour lines are superimposed.
A third aspect of the present invention is characterized in that, in the invention according to the first or second aspect, the image generation unit of the server device performs a process of reflecting a frame image of a spectacle frame holding the spectacle lens in the simulation image, and the display screen unit of the display device displays the simulation image in which the frame image is reflected.
A fourth aspect of the present invention is characterized in that, in the invention according to the first, second, or third aspect, the display device is a head-mounted display worn on the head of the prospective spectacle lens wearer, and the display screen unit displays images individually for each of the left and right eyes of the prospective spectacle lens wearer.
A fifth aspect of the present invention is characterized in that, in the invention according to any one of the first to fourth aspects, the terminal device is a portable information terminal used by a clerk of the spectacle store, and the information output unit displays and outputs the commentary information to the clerk.
A sixth aspect of the present invention is characterized in that, in the invention according to any one of the first to fifth aspects, the information output unit outputs the commentary information as sound.
A seventh aspect of the present invention is characterized in that, in the invention according to any one of the first to sixth aspects, at least one of the display device and the terminal device is provided with an operation unit for performing a selection operation of the partial visual field region to be displayed on the display device.
An eighth aspect of the present invention is characterized in that, in the invention according to any one of the first to seventh aspects, the terminal device includes an information input unit for inputting parameter information on the spectacle lens that the prospective spectacle lens wearer plans to wear, and the server device includes a data generation unit that, based on the parameter information input through the information input unit, identifies the type of lens design standard to be applied to the spectacle lens that the prospective spectacle lens wearer plans to wear and generates the lens design data of that spectacle lens while applying the identified type of lens design standard.
A ninth aspect of the present invention is a simulation apparatus configured by communicably connecting a terminal device used in a spectacle store and a display device viewed by a prospective spectacle lens wearer visiting the spectacle store, wherein the display device includes a display screen unit that selectively displays, for each of a plurality of partial visual field regions constituting the entire visual field region of a spectacle lens, a simulation image obtained by performing image processing that reflects the lens visual characteristics of the spectacle lens on original images of the plurality of partial visual field regions, and lets the prospective spectacle lens wearer view it, and the terminal device includes an information output unit that outputs commentary information on the characteristics of the lens visual characteristics reflected in the partial visual field region displayed by the display screen unit of the display device.
A tenth aspect of the present invention is characterized in that, in the invention according to the ninth aspect, at least one of the terminal device and the display device includes a communication interface unit that connects to a communication line network and communicates with a server device on the communication line network, and is configured to acquire at least the simulation image for each partial visual field region and the commentary information from the server device via the communication interface unit.
An eleventh aspect of the present invention is characterized in that, in the invention according to the ninth aspect, at least one of the terminal device and the display device includes an information storage unit that stores the simulation image for each partial visual field region and the commentary information.
A twelfth aspect of the present invention is a product explanation assistance method for assisting a product explanation given at a spectacle store while using a terminal device used in the spectacle store and a display device viewed by a prospective spectacle lens wearer visiting the spectacle store, the method comprising: an image display step of selectively displaying, on the display device for each of a plurality of partial visual field regions constituting the entire visual field region of a spectacle lens, a simulation image obtained by performing image processing that reflects the lens visual characteristics of the spectacle lens on original images of the plurality of partial visual field regions, and letting the prospective spectacle lens wearer view it; an information output step of acquiring commentary information on the characteristics of the lens visual characteristics reflected in the partial visual field region displayed on the display device from an information storage unit that stores the commentary information in advance in association with the partial visual field region, and outputting it on the terminal device; and a selection switching step of switching the selection of the partial visual field region displayed on the display device and correspondingly switching the commentary information output on the terminal device.
In the present embodiment, the description is divided into sections in the following order.
- A. First embodiment
- 1. Overall schematic configuration of the simulation system
- 2. Functional configuration of the simulation system
- 3. Procedure of simulation processing at the spectacle store
- 4. Effects of the present embodiment
- B. Second embodiment
- C. Modifications, etc.
First, a first embodiment of the present invention will be described.
To begin with, the overall schematic configuration of the simulation system in the present embodiment will be described.
FIG. 1 is a schematic diagram showing an example of the overall configuration of the simulation system in the present embodiment.
As the display device 2, for example, a head-mounted display (hereinafter referred to as an "HMD") worn on the head of the prospective spectacle lens wearer P1 is used. In the present embodiment, the following description is given taking as an example the case where the display device 2 is an HMD.
In the spectacle store S, the tablet terminal 1 and the HMD 2 are used, and these constitute the simulation device described later.
Next, the functional configuration of the simulation system in the present embodiment will be described.
FIG. 2 is a block diagram showing an example of the functional configuration of the simulation system in the present embodiment.
The server device 3 generates simulation images reflecting the lens visual characteristics of spectacle lenses, transmits the generated simulation images to the simulation device 5, and performs other necessary processing so that the prospective spectacle lens wearer P1 can have a simulated experience of how things look through a spectacle lens. To this end, the server device 3 is configured with the functions of a communication interface (hereinafter, interface is abbreviated as "I/F") unit 31, an acquisition information recognition unit 32, a lens design data generation unit 33, an original image storage unit 34, an image generation unit 35, a commentary information storage unit 36, and a control unit 37.
The original image storage unit 34 stores, as the original image data 34a, image data of original images for a plurality of partial visual field regions constituting the entire visual field region of the spectacle lens. However, as long as it stores the image data of the original images for at least the plurality of partial visual field regions, the original image storage unit 34 may also store other image data.
Here, the "entire visual field region" of a spectacle lens refers to the region corresponding to the entire visual field when viewed through the spectacle lens. The "entire visual field" refers to the range of viewing angles visible through the spectacle lens, for example a range of about 90° in the horizontal direction and about 70° in the vertical direction.
The plurality of "partial visual field regions" constituting the entire visual field region are the respective regions obtained when the entire visual field region is divided according to a predetermined division scheme. The division into partial visual field regions may be made in consideration of differences in the characteristics of the lens visual characteristics of the spectacle lens. As a specific example, if the spectacle lens is a progressive power lens, the entire visual field region may be divided into nine regions so that the right, central, and left portions of the distance vision region, the right, central, and left portions of the near vision region, and the right, central, and left portions of the intermediate vision region each belong to a separate region. However, each of these partial visual field regions need only correspond to a portion of the entire visual field region, and the partial visual field regions may have image portions that overlap one another.
By storing the image data for each of such partial visual field regions, the original image storage unit 34 makes it possible to output images not collectively for the entire visual field region of the spectacle lens, but for each partial visual field region obtained by dividing the entire visual field region into a plurality of small visual fields.
The image processing unit 35a realizes the function of performing, based on the lens design data generated by the lens design data generation unit 33, image processing that reflects the visual characteristics (blur, distortion, and the like) specified by that lens design data in the original image data 34a stored in the original image storage unit 34. By this image processing, simulation images are generated in which the lens visual characteristics of the spectacle lens that the prospective spectacle lens wearer P1 plans to wear are reflected in the original image of each partial visual field region. The details of simulation image generation by image processing are based on known techniques (see, for example, International Publication No. 2010/044383), and a description thereof is omitted here.
The image superimposing unit 35b realizes the function of determining, based on the lens design data generated by the lens design data generation unit 33, the clearness index of the spectacle lens that the prospective spectacle lens wearer P1 plans to wear, generating an image representing contour lines of that clearness index, and superimposing the contour image on the simulation image obtained by the image processing in the image processing unit 35a. The "clearness index" here is one of the indices for evaluating the performance of a spectacle lens (in particular, a progressive power lens). The details of the clearness index are based on known techniques (see, for example, Japanese Patent No. 3919097), and a description thereof is omitted here.
On the other hand, the simulation device 5 is used on the spectacle store S side to let the prospective spectacle lens wearer P1 have a simulated experience of how things look through a spectacle lens, and is specifically constituted by the HMD 2 and the tablet terminal 1.
The HMD 2 displays and outputs simulation images while being worn on the head of the prospective spectacle lens wearer P1 who has visited the spectacle store S, thereby giving the prospective spectacle lens wearer P1 a simulated experience of how things look through a spectacle lens. To this end, the HMD 2 is configured with the functions of a communication I/F unit 21 and a display screen unit 22.
The tablet terminal 1 is carried and operated by the clerk P2 of the spectacle store S, and performs the input and output of the information necessary for giving the prospective spectacle lens wearer P1 a simulated experience of how things look through a spectacle lens. To this end, the tablet terminal 1 is configured with the functions of a communication I/F unit 11 and a touch panel unit 12.
The information output unit 12a realizes the function of displaying and outputting various kinds of information to the clerk P2 while using the information output function of the touch panel unit 12. The various kinds of information displayed and output by the information output unit 12a include the commentary information 36a stored in the commentary information storage unit 36 of the server device 3. In other words, the information output unit 12a has the function of acquiring the commentary information 36a from the commentary information storage unit 36 and displaying and outputting it to the clerk P2. The information output unit 12a displays and outputs the commentary information 36a corresponding to the partial visual field region displayed by the display screen unit 22 of the HMD 2.
The operation unit 12b realizes the function of performing, using the information input function of the touch panel unit 12, a selection operation of the partial visual field region to be displayed on the display screen unit 22 of the HMD 2.
The information input unit 12c realizes the function of inputting, using the information input function of the touch panel unit 12, parameter information on the spectacle lens that the prospective spectacle lens wearer P1 plans to wear.
Next, the procedure of the simulation processing performed to give the prospective spectacle lens wearer P1 a simulated experience of the lens wearing state using the simulation system configured as described above will be described.
Here, an outline of the simulation processing performed at the spectacle store S will first be briefly described.
FIG. 3 is a flowchart showing an outline of the simulation processing in the present embodiment.
On the other hand, when the prospective spectacle lens wearer P1 judges that the result of the simulated experience of the lens wearing state is not acceptable (S105), the prescription of the spectacle lens or the like is changed, and the series of steps described above is repeated again until the result of the simulated experience becomes acceptable (S101 to S105).
Next, of the simulation processing of the above procedure, the steps from the generation of the simulation images to their display output will be described in more detail.
FIG. 4 is a flowchart showing details of a characteristic procedure of the simulation processing in the present embodiment.
Further, in the server device 3, when the image generation unit 35 has generated the simulation images for each partial visual field region, the control unit 37 reads out the commentary information 36a for each partial visual field region from the commentary information storage unit 36 (S205).
When the image data of the original images for the entire visual field region is transmitted from the server device 3, the tablet terminal 1 receives the image data with the communication I/F unit 11. Then, the information output unit 12a of the touch panel unit 12 displays and outputs the received original images for the entire visual field region using a predetermined portion of its display screen (S206). The image display output at this time may display the original images of the partial visual field regions side by side as they are, or, when overlapping image portions exist between the partial visual field regions, may composite and display them with the overlapping image portions laid over one another.
By viewing the display output of the original images for the entire visual field region in this way, the clerk P2 can grasp the overall picture of the original images of the simulation images to be shown to the prospective spectacle lens wearer P1. The details of the display output mode of the original images for the entire visual field region at this time (including the position of the predetermined portion of the screen on which they are displayed) will be described later (see, for example, FIG. 5).
The image display output by the display screen unit 22 at this time is not for the entire visual field region but for the simulation image of a small visual field. Therefore, even though the entire visual field region through the spectacle lens spans, for example, about 90° horizontally and about 70° vertically, a display screen unit 22 corresponding to a viewing angle of only about 50° diagonally can perform the image display output without having to reduce the simulation image.
By performing the simulation processing of the above procedure, the simulation system assists the clerk P2 at the spectacle store S in explaining the spectacle lens to the prospective spectacle lens wearer P1. That is, at the spectacle store S, the simulation system assists the explanation by the clerk P2 by going in order through an image display step of selectively showing the simulation image for each partial visual field region to the prospective spectacle lens wearer P1 using the HMD 2, an information output step of outputting, using the tablet terminal 1, the commentary information 36a corresponding to the partial visual field region being displayed on the HMD 2, and a selection switching step of switching the selection of the partial visual field region displayed on the HMD 2 and correspondingly switching the commentary information 36a output on the tablet terminal 1.
Here, the contents displayed and output by the touch panel unit 12 of the tablet terminal 1 in the simulation processing of the above procedure will be described in detail with a specific example.
FIG. 5 is a conceptual diagram showing a specific example of the display output contents on the tablet terminal in the present embodiment.
Next, the contents displayed and output by the display screen unit 22 of the HMD 2 in the simulation processing of the above procedure will be described in detail with a specific example.
The original image in the illustrated example is obtained by dividing the entire visual field region of the spectacle lens into nine partial visual field regions (small visual fields). With this division, when the spectacle lens is a progressive power lens, the right, central, and left portions of the distance vision region, the right, central, and left portions of the near vision region, and the right, central, and left portions of the intermediate vision region each belong to a separate partial visual field region. Adjacent partial visual field regions have overlapping image portions. The original image for the central portion of the near vision region assumes that the prospective spectacle lens wearer P1 holds up and reads a sheet of paper on which characters are written.
The simulation images in the illustrated example are obtained by performing image processing on each of the original images for the partial visual field regions shown in FIG. 6. In the image processing, the lens visual characteristics of a progressive power lens for the right eye with, for example, a spherical power of S2.00, an astigmatic power of C-1.00, an astigmatic axis of Ax180, an addition power of Add2.50, a progressive zone length of 14 mm, and an interpupillary distance of PD = 32 + 32 mm are reflected.
The contour image in the illustrated example corresponds to the division of the original images into the partial visual field regions shown in FIG. 6, and its display brightness differs according to the clearness index of the spectacle lens. The boundary between a region portion of one display brightness and an adjacent region portion of another display brightness corresponds to a contour line of the clearness index. As in the case of FIG. 7, the spectacle lens is assumed to be, for example, a progressive power lens for the right eye with a spherical power of S2.00, an astigmatic power of C-1.00, an astigmatic axis of Ax180, an addition power of Add2.50, a progressive zone length of 14 mm, and an interpupillary distance of PD = 32 + 32 mm.
The image in the illustrated example is obtained by superimposing the contour image of FIG. 8 on the simulation image of FIG. 7. The brightness of the image is also adjusted according to the value of the clearness index. On the display screen unit 22 of the HMD 2, a simulation image with such a contour image superimposed is selectively displayed and output for each partial visual field region.
According to the present embodiment, the following effects are obtained.
Therefore, according to the present embodiment, even when the characteristics of the lens visual characteristics of each spectacle lens differ in several hundred ways depending on the design type and the like, as with a progressive power lens having an individually designed free-form surface, the prospective spectacle lens wearer P1 can be made to fully grasp the differences in the characteristics of the lens visual characteristics regardless of the skill of the clerk P2, and can then judge whether the result of the simulated experience is acceptable, so that the prospective spectacle lens wearer P1 can be given a sense of satisfaction.
Therefore, according to the present embodiment, the display screen unit 22 does not need to be able to reproduce the entire visual field region of the spectacle lens at once; the prospective spectacle lens wearer P1 can be given a simulated experience of the lens wearing state using the small, light, and inexpensive HMD 2, and even in that case the entire visual field region can be viewed clearly by switching the displayed region, so that the dissatisfaction that the spectacle store S side would otherwise feel when introducing the simulation apparatus 5 can be eliminated.
Therefore, according to the present embodiment, the clerk P2 is not required to have a high level of skill, while the prospective spectacle lens wearer P1 can be made to sufficiently grasp the characteristics of the lens visual characteristics of the spectacle lens. In other words, when giving the prospective spectacle lens wearer P1 a simulated experience of the lens wearing state, the dissatisfaction that the prospective spectacle lens wearer P1 or the spectacle store S side would otherwise feel is resolved, and each of them can be given a sense of satisfaction.
Next, a second embodiment of the present invention will be described.
Here, the differences from the first embodiment described above will be described.
In the second embodiment described here, the contents of the simulation images displayed and output differ from those in the first embodiment described above.
FIG. 10 is a flowchart showing an outline of the simulation processing in the second embodiment.
In the second embodiment, when parameter information including the prescription information of the spectacle lens that the prospective spectacle lens wearer P1 plans to wear and the shape information of the spectacle frame holding the spectacle lens is input through the information input unit 12c of the tablet terminal 1 (S301), the parameter information is transmitted to the server device 3 via the communication line network 4. When the parameter information is transmitted, the server device 3 receives it with the communication I/F unit 31 and recognizes it with the acquisition information recognition unit 32. The acquisition information recognition unit 32 then acquires frame shape data specifying the frame shape of the spectacle frame based on the spectacle frame shape information included in the parameter information (S302). Further, the lens design data generation unit 33 generates, based on the recognition result of the prescription information and the like in the acquisition information recognition unit 32, the lens design data of the spectacle lens that the prospective spectacle lens wearer P1 plans to wear (that is, the spectacle lens corresponding to the determined prescription) (S303).
The subsequent processing (S306 to S308) is the same as in the first embodiment (see FIG. 3).
Next, of the simulation processing of the above procedure, the steps from the generation of the simulation images to their display output will be described in more detail.
FIG. 11 is a flowchart showing details of a characteristic procedure of the simulation processing in the second embodiment.
The subsequent processing (S408 to S413) is the same as in the first embodiment (see FIG. 4).
Next, the contents displayed and output by the display screen unit 22 of the HMD 2 in the simulation processing of the above procedure will be described in detail with a specific example.
In the specific examples described below, the original images on which the simulation image generation is based are the same as in the first embodiment (see FIG. 6).
The simulation images in the illustrated example are obtained by performing image processing on each of the original images for the partial visual field regions shown in FIG. 6. In the image processing, the lens visual characteristics of a progressive power lens for the right eye with, for example, a spherical power of S2.00, an astigmatic power of C-1.00, an astigmatic axis of Ax180, an addition power of Add2.50, a progressive zone length of 14 mm, and an interpupillary distance of PD = 32 + 32 mm are reflected.
Further, in the simulation images of the illustrated example, a frame image 50 corresponding to the frame of the spectacle frame is reflected for each partial visual field region. That is, the frame image 50 is superimposed on the simulation image. With such display output, the prospective spectacle lens wearer P1 who views it can easily and accurately grasp the lens visual characteristics of the spectacle lens in the state of wearing the spectacle frame (that is, with the frame rim entering the field of view), namely the visual characteristics inside the frame rim.
The contour image in the illustrated example corresponds to the division of the original images into the partial visual field regions shown in FIG. 6, and its display brightness differs according to the clearness index of the spectacle lens. The boundary between a region portion of one display brightness and an adjacent region portion of another display brightness corresponds to a contour line of the clearness index. As in the case of FIG. 12, the spectacle lens is assumed to be, for example, a progressive power lens for the right eye with a spherical power of S2.00, an astigmatic power of C-1.00, an astigmatic axis of Ax180, an addition power of Add2.50, a progressive zone length of 14 mm, and an interpupillary distance of PD = 32 + 32 mm.
Further, in the contour image of the illustrated example, a frame image 50 corresponding to the frame of the spectacle frame is reflected for each partial visual field region. In the second embodiment, it is also feasible to create an image in which the frame image 50 is reflected in the contour image by, for example, removing the pixel components constituting the simulation image obtained by performing image processing on the original image. With display output using such an image, the prospective spectacle lens wearer P1 who views it can easily and accurately grasp the distribution of the clearness index of the spectacle lens with respect to the frame position of the spectacle frame.
The image in the illustrated example is obtained by superimposing the contour image of FIG. 13 on the simulation image of FIG. 12, and further reflects the frame image 50 corresponding to the frame of the spectacle frame. The brightness of the image is also adjusted according to the value of the clearness index. On the display screen unit 22 of the HMD 2, a simulation image with such a contour image superimposed and the frame image 50 reflected is selectively displayed and output for each partial visual field region.
With such display output, the prospective spectacle lens wearer P1 who views it can easily and accurately grasp the lens visual characteristics of the spectacle lens in the state of wearing the spectacle frame, and can also easily and accurately grasp the distribution of the clearness index of the spectacle lens with respect to the frame position of the spectacle frame.
According to the second embodiment, in addition to the effects obtained in the first embodiment described above, the following effects are obtained.
The first and second embodiments of the present invention have been described above, but the above disclosure presents exemplary embodiments of the present invention. That is, the technical scope of the present invention is not limited to the exemplary embodiments described above.
Furthermore, in each embodiment, the case where the division into partial visual field regions (small visual fields) is already made at the stage of the original image data 34a stored in the original image storage unit 34 has been described as an example, but it suffices that the division into partial visual field regions (small visual fields) is made by the stage of simulation image generation by the image processing unit 35a at the latest. Therefore, if, for example, the original image storage unit 34 stores the original image data 34a of an original image covering the entire visual field region, it is also conceivable to extract the original image of a partial visual field region from that entire visual field region and then generate the simulation image for that partial visual field region.
Claims (12)
- A simulation system in which a terminal device used in a spectacle store, a display device viewed by a prospective spectacle lens wearer visiting the spectacle store, and a server device having the function of a computer are communicably connected, wherein
the server device comprises:
an image generation unit that, based on lens design data of a spectacle lens that the prospective spectacle lens wearer plans to wear, performs image processing that reflects the lens visual characteristics of the spectacle lens on original images of a plurality of partial visual field regions constituting the entire visual field region of the spectacle lens, and generates simulation images for each of the plurality of partial visual field regions; and
an information storage unit that stores commentary information on the characteristics of the lens visual characteristics in each of the plurality of partial visual field regions, in association with the partial visual field region, for each type of lens design standard applied to the lens design data,
the display device comprises a display screen unit that selectively displays the simulation image for each of the plurality of partial visual field regions and lets the prospective spectacle lens wearer view it, and
the terminal device comprises an information output unit that acquires from the information storage unit and outputs the commentary information corresponding to the partial visual field region displayed by the display screen unit of the display device.
- The simulation system according to claim 1, wherein the image generation unit of the server device performs a process of superimposing contour lines of a clearness index of the spectacle lens on the simulation image, and
the display screen unit of the display device displays the simulation image on which the contour lines are superimposed.
- The simulation system according to claim 1 or 2, wherein the image generation unit of the server device performs a process of reflecting a frame image of a spectacle frame holding the spectacle lens in the simulation image, and
the display screen unit of the display device displays the simulation image in which the frame image is reflected.
- The simulation system according to claim 1, 2, or 3, wherein the display device is a head-mounted display worn on the head of the prospective spectacle lens wearer, and
the display screen unit displays images individually for each of the left and right eyes of the prospective spectacle lens wearer.
- The simulation system according to any one of claims 1 to 4, wherein the terminal device is a portable information terminal used by a clerk of the spectacle store, and
the information output unit displays and outputs the commentary information to the clerk.
- The simulation system according to any one of claims 1 to 5, wherein the information output unit outputs the commentary information as sound.
- The simulation system according to any one of claims 1 to 6, wherein at least one of the display device and the terminal device is provided with an operation unit for performing a selection operation of the partial visual field region to be displayed on the display device.
- The simulation system according to any one of claims 1 to 7, wherein the terminal device comprises an information input unit for inputting parameter information on the spectacle lens that the prospective spectacle lens wearer plans to wear, and
the server device comprises a data generation unit that, based on the parameter information input through the information input unit, identifies the type of lens design standard to be applied to the spectacle lens that the prospective spectacle lens wearer plans to wear and generates the lens design data of the spectacle lens while applying the identified type of lens design standard.
- A simulation apparatus configured by communicably connecting a terminal device used in a spectacle store and a display device viewed by a prospective spectacle lens wearer visiting the spectacle store, wherein
the display device comprises a display screen unit that selectively displays, for each of a plurality of partial visual field regions constituting the entire visual field region of a spectacle lens, a simulation image obtained by performing image processing that reflects the lens visual characteristics of the spectacle lens on original images of the plurality of partial visual field regions, and lets the prospective spectacle lens wearer view it, and
the terminal device comprises an information output unit that outputs commentary information on the characteristics of the lens visual characteristics reflected in the partial visual field region displayed by the display screen unit of the display device.
- The simulation apparatus according to claim 9, wherein at least one of the terminal device and the display device comprises a communication interface unit that connects to a communication line network and communicates with a server device on the communication line network, and is configured to acquire at least the simulation image for each partial visual field region and the commentary information from the server device via the communication interface unit.
- The simulation apparatus according to claim 9, wherein at least one of the terminal device and the display device comprises an information storage unit that stores the simulation image for each partial visual field region and the commentary information.
- A product explanation assistance method for assisting a product explanation given at a spectacle store while using a terminal device used in the spectacle store and a display device viewed by a prospective spectacle lens wearer visiting the spectacle store, the method comprising:
an image display step of selectively displaying, on the display device for each of a plurality of partial visual field regions constituting the entire visual field region of a spectacle lens, a simulation image obtained by performing image processing that reflects the lens visual characteristics of the spectacle lens on original images of the plurality of partial visual field regions, and letting the prospective spectacle lens wearer view it;
an information output step of acquiring commentary information on the characteristics of the lens visual characteristics reflected in the partial visual field region displayed on the display device from an information storage unit that stores the commentary information in advance in association with the partial visual field region, and outputting it on the terminal device; and
a selection switching step of switching the selection of the partial visual field region displayed on the display device and correspondingly switching the commentary information output on the terminal device.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/766,277 US10032297B2 (en) | 2013-02-06 | 2013-11-13 | Simulation system, simulation device, and product explanation assistance method |
JP2014560642A JP6088549B2 (ja) | 2013-02-06 | 2013-11-13 | シミュレーションシステムおよびシミュレーション装置 |
EP13874426.3A EP2958035A4 (en) | 2013-02-06 | 2013-11-13 | SIMULATION SYSTEM, SIMULATION DEVICE, AND METHOD OF ASSISTING PRODUCT DESCRIPTION |
CN201380072146.0A CN105009124B (zh) | 2013-02-06 | 2013-11-13 | 仿真系统、仿真装置以及商品说明辅助方法 |
KR1020157023939A KR101748976B1 (ko) | 2013-02-06 | 2013-11-13 | 시뮬레이션 시스템, 시뮬레이션 장치 및 상품 설명 보조 방법 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013021112 | 2013-02-06 | ||
JP2013-021112 | 2013-02-06 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014122834A1 true WO2014122834A1 (ja) | 2014-08-14 |
Family
ID=51299441
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/080632 WO2014122834A1 (ja) | 2013-02-06 | 2013-11-13 | シミュレーションシステム、シミュレーション装置および商品説明補助方法 |
Country Status (6)
Country | Link |
---|---|
US (1) | US10032297B2 (ja) |
EP (1) | EP2958035A4 (ja) |
JP (1) | JP6088549B2 (ja) |
KR (1) | KR101748976B1 (ja) |
CN (1) | CN105009124B (ja) |
WO (1) | WO2014122834A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3189372A1 (de) * | 2014-09-22 | 2017-07-12 | Carl Zeiss Vision International GmbH | Anzeigevorrichtung zur demonstration optischer eigenschaften von brillengläsern |
WO2019009034A1 (ja) * | 2017-07-03 | 2019-01-10 | 株式会社ニコン・エシロール | 眼鏡レンズの設計方法、眼鏡レンズの製造方法、眼鏡レンズ、眼鏡レンズ発注装置、眼鏡レンズ受注装置および眼鏡レンズ受発注システム |
JP2021149031A (ja) * | 2020-03-23 | 2021-09-27 | ホヤ レンズ タイランド リミテッドHOYA Lens Thailand Ltd | 仮想画像生成装置及び仮想画像生成方法 |
WO2022138073A1 (ja) * | 2020-12-21 | 2022-06-30 | 株式会社ニコン・エシロール | 画像生成装置、頭部装着表示装置、画像生成方法、及びプログラム |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3276327B1 (en) | 2016-07-29 | 2019-10-02 | Essilor International | Method for virtual testing of at least one lens having a predetermined optical feature and associated device |
WO2018163166A2 (en) * | 2017-03-05 | 2018-09-13 | Virtuoptica Ltd. | Eye examination method and apparatus therefor |
WO2019082366A1 (ja) * | 2017-10-26 | 2019-05-02 | サン電子株式会社 | 会議システム |
US11175518B2 (en) | 2018-05-20 | 2021-11-16 | Neurolens, Inc. | Head-mounted progressive lens simulator |
US10783700B2 (en) | 2018-05-20 | 2020-09-22 | Neurolens, Inc. | Progressive lens simulator with an axial power-distance simulator |
US11559197B2 (en) | 2019-03-06 | 2023-01-24 | Neurolens, Inc. | Method of operating a progressive lens simulator with an axial power-distance simulator |
US11202563B2 (en) | 2019-03-07 | 2021-12-21 | Neurolens, Inc. | Guided lens design exploration system for a progressive lens simulator |
US11259697B2 (en) | 2019-03-07 | 2022-03-01 | Neurolens, Inc. | Guided lens design exploration method for a progressive lens simulator |
US11259699B2 (en) | 2019-03-07 | 2022-03-01 | Neurolens, Inc. | Integrated progressive lens simulator |
US11288416B2 (en) | 2019-03-07 | 2022-03-29 | Neurolens, Inc. | Deep learning method for a progressive lens simulator with an artificial intelligence engine |
US11241151B2 (en) | 2019-03-07 | 2022-02-08 | Neurolens, Inc. | Central supervision station system for Progressive Lens Simulators |
JP6997129B2 (ja) * | 2019-03-28 | 2022-01-17 | ファナック株式会社 | 制御システム |
CN111093027B (zh) | 2019-12-31 | 2021-04-13 | 联想(北京)有限公司 | 一种显示方法及电子设备 |
EP4106984A4 (en) | 2020-02-21 | 2024-03-20 | Ditto Technologies, Inc. | EYEWEAR FRAME CONNECTION INCLUDING LIVE CONNECTION |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3919097B2 (ja) | 2001-09-06 | 2007-05-23 | Hoya株式会社 | 眼鏡レンズの両眼視性能表示方法及びその装置 |
JP2008250441A (ja) * | 2007-03-29 | 2008-10-16 | Nikon-Essilor Co Ltd | 眼鏡レンズ受発注システム |
WO2009133887A1 (ja) | 2008-04-28 | 2009-11-05 | Hoya株式会社 | レンズ設計基準の選択方法 |
WO2010044383A1 (ja) | 2008-10-17 | 2010-04-22 | Hoya株式会社 | 眼鏡の視野画像表示装置及び眼鏡の視野画像表示方法 |
JP2012066002A (ja) * | 2010-09-27 | 2012-04-05 | Hoya Corp | 眼鏡の視野画像表示装置 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1291633B1 (en) | 2001-09-06 | 2005-09-28 | Hoya Corporation | Method for evaluating binocular performance of spectacle lenses, method for displaying said performance and apparatus therefore |
EP1949174B1 (en) * | 2005-11-15 | 2019-10-16 | Carl Zeiss Vision Australia Holdings Ltd. | Ophthalmic lens simulation system and method |
JP5073521B2 (ja) | 2007-09-28 | 2012-11-14 | 株式会社ニデック | 検眼装置 |
JP4609581B2 (ja) * | 2008-03-26 | 2011-01-12 | セイコーエプソン株式会社 | シミュレーション装置、シミュレーションプログラムおよびシミュレーションプログラムを記録した記録媒体 |
JP5341462B2 (ja) * | 2008-10-14 | 2013-11-13 | キヤノン株式会社 | 収差補正方法、画像処理装置および画像処理システム |
EP2184005B1 (en) | 2008-10-22 | 2011-05-18 | SensoMotoric Instruments Gesellschaft für innovative Sensorik mbH | Method and apparatus for image processing for computer-aided eye surgery |
US8583406B2 (en) | 2008-11-06 | 2013-11-12 | Hoya Lens Manufacturing Philippines Inc. | Visual simulator for spectacle lens, visual simulation method for spectacle lens, and computer readable recording medium recording computer readable visual simulation program for spectacle lens |
US20110213664A1 (en) * | 2010-02-28 | 2011-09-01 | Osterhout Group, Inc. | Local advertising content on an interactive head-mounted eyepiece |
US20130278631A1 (en) * | 2010-02-28 | 2013-10-24 | Osterhout Group, Inc. | 3d positioning of augmented reality information |
CA2820241C (en) * | 2012-06-13 | 2020-01-14 | Robert G. Hilkes | An apparatus and method for enhancing human visual performance in a head worn video system |
-
2013
- 2013-11-13 JP JP2014560642A patent/JP6088549B2/ja not_active Expired - Fee Related
- 2013-11-13 CN CN201380072146.0A patent/CN105009124B/zh not_active Expired - Fee Related
- 2013-11-13 EP EP13874426.3A patent/EP2958035A4/en not_active Withdrawn
- 2013-11-13 US US14/766,277 patent/US10032297B2/en not_active Expired - Fee Related
- 2013-11-13 KR KR1020157023939A patent/KR101748976B1/ko active IP Right Grant
- 2013-11-13 WO PCT/JP2013/080632 patent/WO2014122834A1/ja active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3919097B2 (ja) | 2001-09-06 | 2007-05-23 | Hoya株式会社 | 眼鏡レンズの両眼視性能表示方法及びその装置 |
JP2008250441A (ja) * | 2007-03-29 | 2008-10-16 | Nikon-Essilor Co Ltd | 眼鏡レンズ受発注システム |
WO2009133887A1 (ja) | 2008-04-28 | 2009-11-05 | Hoya株式会社 | レンズ設計基準の選択方法 |
WO2010044383A1 (ja) | 2008-10-17 | 2010-04-22 | Hoya株式会社 | 眼鏡の視野画像表示装置及び眼鏡の視野画像表示方法 |
JP2012066002A (ja) * | 2010-09-27 | 2012-04-05 | Hoya Corp | 眼鏡の視野画像表示装置 |
Non-Patent Citations (1)
Title |
---|
See also references of EP2958035A4 |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3189372A1 (de) * | 2014-09-22 | 2017-07-12 | Carl Zeiss Vision International GmbH | Anzeigevorrichtung zur demonstration optischer eigenschaften von brillengläsern |
WO2019009034A1 (ja) * | 2017-07-03 | 2019-01-10 | 株式会社ニコン・エシロール | 眼鏡レンズの設計方法、眼鏡レンズの製造方法、眼鏡レンズ、眼鏡レンズ発注装置、眼鏡レンズ受注装置および眼鏡レンズ受発注システム |
JPWO2019009034A1 (ja) * | 2017-07-03 | 2020-05-21 | 株式会社ニコン・エシロール | 眼鏡レンズの設計方法、眼鏡レンズの製造方法、眼鏡レンズ、眼鏡レンズ発注装置、眼鏡レンズ受注装置および眼鏡レンズ受発注システム |
JP7252892B2 (ja) | 2017-07-03 | 2023-04-05 | 株式会社ニコン・エシロール | 眼鏡レンズの設計方法、および眼鏡レンズの製造方法 |
US11754856B2 (en) | 2017-07-03 | 2023-09-12 | Nikon-Essilor Co., Ltd. | Method for designing eyeglass lens, method for manufacturing eyeglass lens, eyeglass lens, eyeglass lens ordering device, eyeglass lens order receiving device, and eyeglass lens ordering and order receiving system |
JP2021149031A (ja) * | 2020-03-23 | 2021-09-27 | ホヤ レンズ タイランド リミテッドHOYA Lens Thailand Ltd | 仮想画像生成装置及び仮想画像生成方法 |
WO2021193261A1 (ja) * | 2020-03-23 | 2021-09-30 | ホヤ レンズ タイランド リミテッド | 仮想画像生成装置及び仮想画像生成方法 |
JP7272985B2 (ja) | 2020-03-23 | 2023-05-12 | ホヤ レンズ タイランド リミテッド | 仮想画像生成装置及び仮想画像生成方法 |
WO2022138073A1 (ja) * | 2020-12-21 | 2022-06-30 | 株式会社ニコン・エシロール | 画像生成装置、頭部装着表示装置、画像生成方法、及びプログラム |
Also Published As
Publication number | Publication date |
---|---|
CN105009124A (zh) | 2015-10-28 |
KR20150114995A (ko) | 2015-10-13 |
JPWO2014122834A1 (ja) | 2017-01-26 |
CN105009124B (zh) | 2018-01-19 |
KR101748976B1 (ko) | 2017-06-19 |
JP6088549B2 (ja) | 2017-03-01 |
EP2958035A4 (en) | 2016-10-19 |
US20150371415A1 (en) | 2015-12-24 |
EP2958035A1 (en) | 2015-12-23 |
US10032297B2 (en) | 2018-07-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6088549B2 (ja) | シミュレーションシステムおよびシミュレーション装置 | |
JP7506126B2 (ja) | 仮想現実、拡張現実、および複合現実システムおよび方法 | |
CN101356468B (zh) | 眼镜片模拟系统和方法 | |
JP6023801B2 (ja) | シミュレーション装置 | |
JP6014038B2 (ja) | 眼鏡装用シミュレーション方法、プログラム、装置、眼鏡レンズ発注システム及び眼鏡レンズの製造方法 | |
JP6276691B2 (ja) | シミュレーション装置、シミュレーションシステム、シミュレーション方法及びシミュレーションプログラム | |
JP5098739B2 (ja) | シミュレーション装置、シミュレーションプログラムおよびシミュレーションプログラムを記録した記録媒体 | |
CN102592484A (zh) | 用于基于虚拟现实的训练模拟器的可重配置平台管理设备 | |
US20160363763A1 (en) | Human factor-based wearable display apparatus | |
WO2010044383A1 (ja) | 眼鏡の視野画像表示装置及び眼鏡の視野画像表示方法 | |
JP5632245B2 (ja) | 眼鏡の視野画像表示装置 | |
JP3897058B2 (ja) | 眼鏡の疑似体験装置、眼鏡の疑似体験方法及び記録媒体 | |
JPH11183856A (ja) | 眼鏡の視野体験装置、視野体験方法および記録媒体 | |
JPH11119172A (ja) | 眼鏡の視野体験装置、視野体験方法および記録媒体 | |
WO2017051915A1 (ja) | 視覚シミュレーション装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13874426 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2014560642 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14766277 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013874426 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 20157023939 Country of ref document: KR Kind code of ref document: A |