WO2019088273A1 - Image processing device, image processing method and image processing program - Google Patents

Image processing device, image processing method and image processing program

Info

Publication number
WO2019088273A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
image processing
background image
layer
real estate
Application number
PCT/JP2018/040919
Other languages
French (fr)
Japanese (ja)
Inventor
英起 多田
亮介 稲森
Original Assignee
ナーブ株式会社
Application filed by ナーブ株式会社
Priority to JP2019523897A (patent JP6570161B1)
Publication of WO2019088273A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics

Definitions

  • The present invention relates to an image processing apparatus, an image processing method, and an image processing program for combining an image of an object with a background image obtained by photographing a subject.
  • Home staging is a service that provides interior coordination for real estate properties, such as single-family homes and condominiums, in order to promote their sale. Specifically, furniture, furnishings, accessories, and the like, selected in consideration of the target demographic of purchasers, current trends, the characteristics of the property, and so on, are placed in the property. Prospective purchasers (including those still considering a purchase) are said to find it easier to imagine living in a property by previewing one to which home staging has been applied, so that the sale tends to proceed smoothly.
  • Recently, services that allow users to preview real estate properties using virtual reality (VR) technology have also appeared.
  • VR images are usually displayed on a glasses- or goggle-type display device called a head-mounted display (hereinafter also referred to as an "HMD").
  • The HMD incorporates a screen on which two images provided with parallax are displayed, and two lenses installed in line with the user's two eyes; by viewing the screen through these lenses, the user can perceive the image three-dimensionally.
  • In addition, a gyro sensor and an acceleration sensor are built into the HMD, and the image displayed on the screen changes in accordance with the movement of the user's head detected by these sensors. As a result, the user can experience the sensation of being inside a three-dimensional image.
  • As a technology relating to interior simulation of real estate properties, Patent Document 1 discloses a technique that generates a 3D model of a real estate property from property information including a floor plan image, indoor photographs, and the like, and performs indoor browsing in virtual reality, interior simulation, and the like based on the generated 3D model (see paragraph 0015 of Patent Document 1). Specifically, the scale of a 3D furniture model prepared in advance is adjusted and the model is placed in the 3D model of the property (see paragraph 0070 of Patent Document 1).
  • Patent Document 2 discloses an image processing apparatus comprising: surrounding-environment three-dimensional shape data generation means for generating three-dimensional shape data of the surrounding environment from environment map data consisting of two or more viewpoints; and virtual subject combination means for combining a virtual subject with background image data using the surrounding-environment three-dimensional shape data as a light source.
  • Patent Document 3 discloses a video processing apparatus comprising: storage means for storing a foreground image, a background image serving as the background of the foreground image, and depth information indicating the depth of the background image; calculation means for calculating, from light source parameters holding information on the position and illuminance of the light source as viewed from the position where the background image was taken and from the spatial coordinates of the combining position, the distance between the light source and the spatial coordinates of the combining position; and lighting means for applying a lighting effect to the foreground image using the light source parameters.
  • In addition, as a service for real estate agents, Non-Patent Document 1 discloses a service in which a 360° image (celestial sphere image) is obtained by photographing a real estate property using a digital camera (product name: THETA) manufactured by Ricoh Co., Ltd., processed into VR content, and provided to the real estate agent.
  • Japanese Patent No. 6116746; JP 2013-152683 A; JP 2013-149219 A
  • The present invention has been made in view of the above, and an object thereof is to provide an image processing apparatus, an image processing method, and an image processing program capable of combining a three-dimensional model of an object such as furniture with a two-dimensional background image freely and without discomfort.
  • To solve the above problems, an image processing apparatus according to one aspect of the present invention is an image processing apparatus that combines, with a background image in which a real estate property is captured, an image of an object virtually arranged in the real estate property, and comprises: an arrangement setting unit that sets an arrangement location of the object on a floor plan of the real estate property; a 3D position estimation unit that estimates, in a three-dimensional graphics space surrounded by a celestial sphere image acquired as the background image by photographing the real estate property, an object position that is a position corresponding to the arrangement location of the object on the floor plan; a shadow generation unit that generates a shadow caused by arranging a three-dimensional model of the object at the object position; an object layer generation unit that generates a first layer by rendering an image in which the three-dimensional model arranged at the object position is projected onto the same celestial sphere as the background image, with the center of the three-dimensional graphics space as the projection center; a shadow layer generation unit that generates a second layer by rendering an image in which the shadow is projected onto the celestial sphere, with the center of the three-dimensional graphics space as the projection center; and an image combining unit that combines the second layer and the first layer with the background image.
  • The image processing apparatus may further include a display unit that displays an image of the floor plan and an operation input unit that receives an operation performed by a user, and the arrangement setting unit may set the arrangement location based on an operation, received by the operation input unit, on the image of the floor plan displayed on the display unit.
  • In the image processing apparatus, the shadow generation unit may set a light source based on the background image, and generate the shadow of the three-dimensional model based on the light source.
  • In the image processing apparatus, the 3D position estimation unit may estimate the object position based on the dimensions of each part in the floor plan.
  • In the image processing apparatus, the 3D position estimation unit may estimate the object position based on the distance and dimensions of a subject located in the vicinity of the arrangement location.
  • In the image processing apparatus, the 3D position estimation unit may acquire a shooting point based on the distance and dimensions of a subject measured from the background image, and estimate the object position based on the shooting point.
  • In the image processing apparatus, the 3D position estimation unit may acquire a shooting point based on the measured distance and dimensions of the subject, and estimate the object position based on the shooting point.
  • An image processing method according to one aspect of the present invention is an image processing method for combining, with a background image in which a real estate property is captured, an image of an object virtually arranged in the real estate property, and comprises: setting an arrangement location of the object on a floor plan of the real estate property; estimating, in a three-dimensional graphics space surrounded by a celestial sphere image acquired as the background image by photographing the real estate property, an object position that is a position corresponding to the arrangement location of the object on the floor plan; generating a shadow caused by arranging a three-dimensional model of the object at the object position; generating a first layer by rendering an image in which the three-dimensional model arranged at the object position is projected onto the same celestial sphere as the background image, with the center of the three-dimensional graphics space as the projection center; generating a second layer by rendering an image in which the shadow is projected onto the celestial sphere, with the center of the three-dimensional graphics space as the projection center; and combining the second layer and the first layer with the background image.
  • An image processing program according to one aspect of the present invention is an image processing program for combining, with a background image in which a real estate property is captured, an image of an object virtually arranged in the real estate property, and causes a computer to execute the steps of: setting an arrangement location of the object on a floor plan of the real estate property; estimating, in a three-dimensional graphics space surrounded by a celestial sphere image acquired as the background image by photographing the real estate property, an object position that is a position corresponding to the arrangement location of the object on the floor plan; generating a shadow caused by arranging a three-dimensional model of the object at the object position; generating a first layer by rendering an image in which the three-dimensional model arranged at the object position is projected onto the same celestial sphere as the background image, with the center of the three-dimensional graphics space as the projection center; generating a second layer by rendering an image in which the shadow is projected onto the celestial sphere, with the center of the three-dimensional graphics space as the projection center; and combining the second layer and the first layer with the background image.
  • According to the present invention, an object position in the three-dimensional graphics space corresponding to the arrangement location of the object on the floor plan is estimated, layers on which the image of the three-dimensional model arranged at the object position and the image of its shadow are respectively rendered are generated, and these layers are combined with the background image; it therefore becomes possible to combine a three-dimensional model of an object such as furniture with a two-dimensional background image freely and without discomfort.
  • FIG. 1 is a block diagram showing a schematic configuration of an image processing apparatus according to a first embodiment of the present invention. FIGS. 2A and 2B are schematic diagrams for explaining the image combining process performed by the image processing apparatus shown in FIG. 1. FIG. 3 is a flowchart showing the image processing method according to the first embodiment of the present invention. FIG. 4 is a floor plan of a real estate property, which is an example of the subject of the background image. FIG. 5 is a flowchart showing the position estimation process shown in FIG. 3. FIGS. 6 to 8 are schematic diagrams for explaining the position estimation process of the 3D model.
  • FIG. 17 is a network diagram showing a configuration example of a system to which the image processing apparatus according to the first to third embodiments of the present invention is applied.
  • FIG. 18 is a schematic view showing another configuration example of a system to which the image processing apparatus according to the first to third embodiments of the present invention is applied.
  • The image processing apparatus according to the embodiments of the present invention uses a celestial sphere image in which a real estate property is captured as a background image, and combines with this background image an image of a three-dimensional model (hereinafter, 3D model) of an object such as furniture or furnishings.
  • The celestial sphere image used as the background image is also referred to as a 360° image, a full spherical image, an omnidirectional image, or the like, and is an image capturing the subjects 360° around the shooting location.
  • The background image (celestial sphere image) may be a still image or a moving image.
  • In the embodiments, a virtual transparent floor is disposed in the background image, and the shadow of the 3D model is projected onto this floor to enhance the realism of the 3D model. In addition, by lighting the 3D model with the background image as the light source, discomfort is reduced.
  • Furthermore, in the present image processing apparatus, since the data of the 3D model of the object is rendered in accordance with the projection of the two-dimensional background image, there is no mismatch in depth, and display without discomfort is possible.
  • FIG. 1 is a block diagram showing a schematic configuration of an image processing apparatus according to a first embodiment of the present invention.
  • FIGS. 2A and 2B are schematic views for explaining the image combining process performed by the image processing apparatus shown in FIG. 1. FIG. 2A shows a floor plan of a real estate property. FIG. 2B shows part of an image obtained by combining 3D models of a sofa, a coffee table, a closet, and a picture with a background image (celestial sphere image) acquired by shooting in the living room of the real estate property (see FIG. 2A).
  • As shown in FIG. 1, the image processing apparatus 10 includes a communication interface 11, a display unit 12, an operation input unit 13, a storage unit 14, and a processor 15.
  • A general-purpose computer can be used as such an image processing apparatus 10.
  • The communication interface 11 connects the image processing apparatus 10 to a communication network, and transmits and receives information to and from other devices connected to the communication network.
  • The communication interface 11 is configured using, for example, a soft modem, a cable modem, a wireless modem, an ADSL modem, an ONU (Optical Network Unit), or the like.
  • The communication interface 11 also functions as a data acquisition unit that loads image data generated by an omnidirectional camera, data of 3D models, and the like into the image processing apparatus 10.
  • The display unit 12 is a display including a display panel made of liquid crystal or organic EL (electroluminescence) and a drive unit.
  • The operation input unit 13 comprises input devices such as operation buttons, a keyboard, a pointing device such as a mouse, and a touch sensor provided on the display unit 12.
  • The operation input unit 13 receives an operation performed by the user and inputs a signal corresponding to the operation to the processor 15.
  • The storage unit 14 is a computer-readable storage medium, such as a semiconductor memory such as a ROM or a RAM.
  • The storage unit 14 includes a program storage unit 141, a background image data storage unit 142, a floor plan data storage unit 143, a 3D data storage unit 144, and a composite image data storage unit 145.
  • The storage unit 14 stores an operating system program, driver programs, application programs for executing various functions, various parameters used during execution of these programs, image data, and the like.
  • The program storage unit 141 stores various programs such as the image processing program.
  • The background image data storage unit 142 stores image data (hereinafter also referred to as background image data) of celestial sphere images acquired as background images by photographing real estate properties.
  • The floor plan data storage unit 143 stores image data of floor plans of real estate properties (hereinafter also referred to as floor plan data).
  • The 3D data storage unit 144 stores data representing 3D models of objects such as furniture to be virtually arranged in a real estate property (hereinafter also referred to as 3D data), together with related parameters (for example, parameters representing textures).
  • The 3D data may be generated by the image processing apparatus 10, or may be acquired from another apparatus through the communication network N.
  • The composite image data storage unit 145 stores image data (hereinafter also referred to as composite image data) of composite images obtained by combining images of 3D models with background images.
  • The processor 15 is configured using, for example, a central processing unit (CPU) or a graphics processing unit (GPU), and reads the various programs stored in the program storage unit 141 to centrally control the respective units of the image processing apparatus 10 and to execute various processes for combining an image of an object such as furniture with a background image in which a real estate property is captured.
  • The functions of the image processing unit 150, realized by the processor 15 executing the image processing program, include an arrangement setting unit 151, a 3D position estimation unit 152, an object layer generation unit 153, a shadow generation unit 154, a shadow layer generation unit 155, and an image combining unit 156.
  • The arrangement setting unit 151 sets the arrangement location, on the floor plan, of the object to be virtually arranged in the real estate property, in accordance with a signal input from the operation input unit 13.
  • The 3D position estimation unit 152 estimates, in a three-dimensional graphics space (hereinafter also referred to as a 3D space) that is surrounded by the celestial-sphere background image and whose center is associated with the shooting point, the position corresponding to the arrangement location of the object on the floor plan (hereinafter also referred to as the object position).
  • The object layer generation unit 153 renders an image of the 3D model as viewed from the center of the 3D space onto a layer provided on the same celestial sphere as the background image. That is, an image in which the 3D model is projected onto the celestial sphere is generated with the center of the 3D space as the projection center (a sketch of this projection follows below). Hereinafter, the layer on which the image of the 3D model is rendered is referred to as the object layer (first layer).
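Projecting a point onto the celestial sphere with the center of the 3D space as the projection center amounts to converting the viewing direction into sphere coordinates. The following is a minimal sketch of that conversion for an equirectangular image, under an assumed axis convention (y up, z forward); neither the convention nor this code comes from the patent itself.

```python
import math

def project_to_equirect(point, width, height):
    """Map a 3D point to pixel coordinates of an equirectangular
    celestial-sphere image, using the sphere center (the origin) as
    the projection center. Axis convention is an assumption."""
    x, y, z = point
    r = math.sqrt(x * x + y * y + z * z)
    lon = math.atan2(x, z)   # azimuth in (-pi, pi]
    lat = math.asin(y / r)   # elevation in [-pi/2, pi/2]
    u = (lon / (2.0 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

# Example: a point one metre ahead of and half a metre below the camera.
print(project_to_equirect((0.0, -0.5, 1.0), 4096, 2048))
```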
  • The shadow generation unit 154 generates the shadow produced by the 3D model placed at the object position in the 3D space.
  • Specifically, the shadow generation unit 154 executes image-based lighting processing that sets a light source based on the background image, and generates the shadow of the 3D model based on the set light source.
  • Image-based lighting is a method of rendering a scene using an image obtained by photographing a real subject as the color information of the lighting (a simplified sketch follows below).
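As a deliberately simplified stand-in for that idea (full image-based lighting integrates the entire environment map rather than picking one pixel), the brightest pixel of the equirectangular background image could be treated as a directional light when casting the 3D model's shadow. A sketch, assuming a float HDR array; none of these names come from the patent:

```python
import numpy as np

def dominant_light_direction(env_map):
    """Return the unit direction of the brightest pixel of an
    equirectangular environment map (the background image used as a
    light probe). env_map: float array of shape (H, W, 3)."""
    h, w, _ = env_map.shape
    luminance = env_map @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709
    v, u = np.unravel_index(np.argmax(luminance), luminance.shape)
    lon = ((u + 0.5) / w - 0.5) * 2.0 * np.pi
    lat = (0.5 - (v + 0.5) / h) * np.pi
    return np.array([np.cos(lat) * np.sin(lon),   # x
                     np.sin(lat),                 # y (up)
                     np.cos(lat) * np.cos(lon)])  # z
```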
  • The shadow layer generation unit 155 renders an image of the shadow generated by the shadow generation unit 154, as viewed from the center of the 3D space, onto a layer provided on the same celestial sphere as the background image. That is, an image in which the shadow of the 3D model is projected onto the celestial sphere is generated with the center of the 3D space as the projection center.
  • Hereinafter, the layer on which the shadow image is rendered is referred to as the shadow layer (second layer).
  • The image combining unit 156 generates a composite image in which the image of the 3D model of the object is combined with the background image, by superimposing the shadow layer and the object layer on the background image in this order.
  • The outline of the image processing in this embodiment is as follows. That is, a total of three celestial-sphere layers are prepared: the background image (celestial sphere image) to be combined, a transparent celestial-sphere layer for shadows (shadow layer), and a transparent celestial-sphere layer for 3D models of objects such as furniture (object layer); a structural sketch follows this outline.
  • The background image is acquired by photographing at the site of a real estate property using an omnidirectional camera or the like.
  • The layers superimposed on the background image may be increased as appropriate.
  • The 3D model of the object is placed in the 3D graphics space (3D space) surrounded by these celestial-sphere layers. Then, in order to have the 3D model cast a shadow, a light source based on the background image is set by image-based lighting.
  • A camera for rendering the celestial sphere image is placed at the origin (center of the celestial sphere) of the 3D space.
  • In other words, a viewpoint (projection center) for viewing the 3D model and its shadow from the center of the 3D space is set.
  • Then, an image of the shadow produced by the light source set based on the background image is rendered and pasted onto the shadow layer.
  • Likewise, an image of the 3D model as arranged in the 3D space is rendered and pasted onto the object layer.
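A structural sketch of that three-layer preparation, assuming each layer is stored as an equirectangular array of the same size and that the shadow and object layers start fully transparent (the class and field names are illustrative, not the patent's):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SceneLayers:
    """Background photo plus two initially transparent celestial-sphere
    layers: one for shadows, one for 3D models of objects."""
    background: np.ndarray                   # equirectangular photo, (H, W, 3)
    shadow: np.ndarray = field(init=False)   # RGBA; alpha 0 = transparent
    objects: np.ndarray = field(init=False)  # RGBA; alpha 0 = transparent

    def __post_init__(self):
        h, w = self.background.shape[:2]
        self.shadow = np.zeros((h, w, 4), dtype=np.float32)
        self.objects = np.zeros((h, w, 4), dtype=np.float32)
```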
  • FIG. 3 is a flowchart showing an image processing method according to the present embodiment.
  • FIG. 4 is a floor plan of real estate, which is an example of the subject of the background image.
  • In the following, it is assumed that an omnidirectional camera is installed at a position corresponding to the shooting point P0 on the floor plan shown in FIG. 4, a celestial sphere image is acquired by photographing, and this celestial sphere image is used as the background image with which the 3D model is combined.
  • First, the image processing unit 150 acquires the background image with which the 3D model is to be combined.
  • Specifically, the image processing unit 150 acquires the background image by reading out the background image data stored in the background image data storage unit 142.
  • Next, the image processing unit 150 acquires data (3D data) representing the 3D model to be combined with the background image. Specifically, the image processing unit 150 causes the display unit 12 to display names, icons, or the like representing the 3D models whose 3D data is stored in the 3D data storage unit 144, and allows the user to select a desired 3D model. The image processing unit 150 then reads the 3D data representing the selected 3D model from the 3D data storage unit 144 in accordance with the signal input from the operation input unit 13 in response to the user's operation.
  • Further, the image processing unit 150 acquires parameters relating to the 3D model to be combined with the background image. Specifically, parameters for expressing the coordinates of the 3D model as functions such as curves or curved surfaces, and parameters used in processing such as texture mapping, are read out from the 3D data storage unit 144.
  • Subsequently, the image processing unit 150 estimates the position of the 3D model in the 3D space corresponding to the arrangement location of the object on the floor plan shown in FIG. 4.
  • FIG. 5 is a flowchart showing the position estimation process in step S13.
  • FIGS. 6 to 8 are schematic diagrams for explaining the position estimation process of the 3D model.
  • FIG. 6 shows a state in which the omnidirectional camera 20 mounted on a tripod is installed on the floor surface.
  • FIG. 7 shows a state in which the background image L1, obtained by photographing the real estate property with the omnidirectional camera 20, is cut along a horizontal plane passing through the center point P0' of the celestial sphere.
  • FIG. 8 shows the background image L1 cut along a vertical plane passing through the center point P0'.
  • In the background image L1 shown in FIGS. 7 and 8, images of the floor surface, walls, ceiling, entrance door, and the like included in the field of view of the omnidirectional camera 20 are captured.
  • In the present embodiment, the dimensions of the respective parts of the real estate property (each room, fixtures, and the like shown in the floor plan) and the coordinates of the shooting point P0 (its plane coordinates and the height of the shooting point) are known, and the position of the 3D model in the 3D space is estimated based on these dimensions and coordinates.
  • Among these, the height of the shooting point P0 can be obtained as the height h of the omnidirectional camera 20 including the tripod.
  • First, the arrangement setting unit 151 determines the arrangement location of the object on the floor plan.
  • Specifically, the arrangement setting unit 151 causes the display unit 12 to display the floor plan of the real estate property, and allows the user to designate a place on the floor plan where an object such as furniture is to be arranged.
  • The arrangement setting unit 151 then acquires the position information (coordinates) of the object on the floor plan in accordance with the signal input from the operation input unit 13 in response to the user's operation.
  • In the following, it is assumed that the point P1 on the floor plan shown in FIG. 4 is determined as the arrangement location of the object desired by the user.
  • Next, the 3D position estimation unit 152 acquires the position of the shooting point P0 at the time the background image L1 was captured.
  • Subsequently, the 3D position estimation unit 152 derives the position of the 3D model in the 3D space surrounded by the background image L1 based on the floor plan. Specifically, the 3D position estimation unit 152 creates a 3D space in which the shooting point P0 is associated with the center point P0' of the celestial sphere, and associates the position and dimensions of each part in the floor plan with coordinates in the 3D space (a minimal sketch of this mapping follows below). Thereby, the coordinates of the point P1' in the 3D space corresponding to the point P1 on the floor plan can be obtained. When the position of the 3D model has been estimated in this way, the process returns to the main routine.
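The floor-plan-to-3D-space mapping can be sketched as follows, assuming the floor plan has a known scale in metres per pixel, the camera sits at the origin of the 3D space, and the floor lies at minus the camera height; all of these conventions are assumptions for illustration:

```python
def floor_plan_to_3d(point_px, shooting_point_px, scale_m_per_px, camera_height_m):
    """Map a floor-plan pixel coordinate to coordinates in the 3D space
    whose origin is the celestial-sphere center P0' (the camera)."""
    dx = (point_px[0] - shooting_point_px[0]) * scale_m_per_px
    dz = (point_px[1] - shooting_point_px[1]) * scale_m_per_px
    # An object standing on the floor sits below the camera by its height.
    return (dx, -camera_height_m, dz)

# Example with illustrative numbers: P1 is 150 px right and 80 px down
# from P0 on a 0.02 m/px plan; the camera is mounted 1.6 m above the floor.
print(floor_plan_to_3d((150, 80), (0, 0), 0.02, 1.6))  # (3.0, -1.6, 1.6)
```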
  • In step S14 following step S13, the image processing unit 150 arranges the 3D model at the position in the 3D space estimated in step S13.
  • FIG. 9 is a schematic diagram for explaining the image processing shown in FIG. 3, and shows a state in which the 3D model a11 of the sofa is disposed in the 3D space surrounded by the background image L1.
  • Next, the shadow generation unit 154 sets the light source and the material. Specifically, the shadow generation unit 154 generates a light probe image by image-based lighting based on the background image L1, and sets the material (surface reflection characteristics and the like) of the 3D model.
  • The light probe image is a high dynamic range image recording the incident illumination conditions in all directions.
  • Subsequently, the shadow generation unit 154 generates the shadow of the 3D model arranged in the 3D space, based on the light source and the material set in step S15. That is, rendering is performed using the generated light probe image. As a result, a shadow is generated in the 3D space with the 3D model as the occluder, and lighting based on the background image L1 is also applied to the surface of the 3D model.
  • FIG. 9 shows the shadow a13 of the 3D model a11 generated by the light source a12.
  • Next, the shadow layer generation unit 155 generates the shadow layer on which the shadow image of the 3D model is rendered. Specifically, the shadow layer generation unit 155 projects the shadow of the 3D model generated in step S16 onto the shadow layer L2, located on the same celestial sphere as the background image, with the center point P0' of the 3D space as the projection center.
  • FIG. 9 shows the area a14 of the image obtained by projecting the shadow a13 onto the shadow layer L2, which lies above the background image L1. In the shadow layer L2, the area other than the image area a14 is transparent.
  • Next, the object layer generation unit 153 generates the object layer on which the image of the 3D model is rendered. Specifically, the object layer generation unit 153 projects the 3D model, placed in the 3D space in step S14 and lit in step S16, onto the object layer L3, located on the same celestial sphere as the background image, with the center point P0' of the 3D space as the projection center. FIG. 9 shows the area a15 of the image obtained by projecting the 3D model a11 onto the object layer L3, which lies above the shadow layer L2. In the object layer L3, the area other than the image area a15 is transparent.
  • Subsequently, the image combining unit 156 generates a composite image in which the background image L1, the shadow layer L2, and the object layer L3 are superimposed in this order. Thereby, an image in which the image of the 3D model is combined at the user's desired position in the background image is obtained (a minimal compositing sketch follows below).
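A minimal sketch of that superimposition, assuming float images in [0, 1] and standard "over" alpha compositing (the patent does not spell out the blending math):

```python
import numpy as np

def alpha_over(base_rgb, layer_rgba):
    """Composite one RGBA layer over an RGB base (the 'over' operator)."""
    alpha = layer_rgba[..., 3:4]
    return layer_rgba[..., :3] * alpha + base_rgb * (1.0 - alpha)

def composite(background_l1, shadow_l2, object_l3):
    """Superimpose the layers in the order described in the text:
    background image L1, then shadow layer L2, then object layer L3."""
    out = alpha_over(background_l1, shadow_l2)
    return alpha_over(out, object_l3)
```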
  • The composite image generated in this manner may be displayed as a panoramic image obtained by expanding the celestial sphere, or may be displayed as VR content.
  • As described above, in the present embodiment, the position in the 3D space corresponding to the arrangement location of the user-desired object designated on the floor plan is estimated, the 3D model of the object is placed at this position, and the image of the 3D model is rendered on the celestial sphere; the object can therefore be arranged freely with respect to the background image without causing a mismatch in position or size.
  • In addition, since the shadow of the 3D model is generated by a light source set based on the background image and the image of the shadow is rendered on the celestial sphere, a composite image without discomfort can be generated.
  • Furthermore, since the 3D model is lit by the light source set based on the background image, a more realistic composite image can be generated.
  • FIG. 10 is a flow chart showing position estimation processing in the second embodiment of the present invention.
  • FIG. 11 is a floor plan of real estate, which is an example of a subject of a background image.
  • FIGS. 12A to 13B are schematic diagrams for explaining the position estimation process in the present embodiment.
  • FIG. 12A is a side view showing a state in which the real estate property is photographed by the omnidirectional camera 20, and FIG. 12B shows a state in which the background image L1 obtained by that photographing is cut along a vertical plane passing through the center point P0'.
  • FIG. 13A is a top view showing a state in which the photographing is performed by the omnidirectional camera 20, and FIG. 13B shows a state in which the background image L1 is cut along a horizontal plane passing through the center point P0'.
  • In the present embodiment, the object position is estimated based on the dimensions of a subject, such as furniture or equipment, located near the arrangement location of the object desired by the user, using that subject as a reference.
  • Hereinafter, the subject used as the reference is referred to as the reference object.
  • In step S231, the arrangement setting unit 151 determines the arrangement location of the object on the floor plan.
  • The process of determining the arrangement location of the object is the same as in the first embodiment (see step S131 in FIG. 5).
  • Next, the 3D position estimation unit 152 selects a subject near the arrangement location of the object on the floor plan as the reference object. In the following, it is assumed that the sink shown in FIG. 11 is selected as the reference object 21.
  • Subsequently, the 3D position estimation unit 152 acquires the vertical angle of view and the horizontal angle of view of the visual field area that fully contains the reference object 21 captured in the background image L1. Specifically, as shown in FIGS. 12B and 13B, the area a21 in which the reference object 21 is captured is extracted from the background image L1, and the vertical angle of view θ and the horizontal angle of view φ are measured. For the vertical angle of view θ, an area corresponding to the height h from the floor surface to the omnidirectional camera 20 is measured.
  • Next, the 3D position estimation unit 152 generates a 3D space that is surrounded by the background image L1 and in which the shooting point P0 on the floor plan is associated with the center point P0' of the celestial sphere.
  • Subsequently, the 3D position estimation unit 152 estimates the position of the reference object 21 in the 3D space.
  • The position of the reference object 21 may be represented by a representative point, for example the center point of the reference object 21 or the point at the shortest distance from the shooting point P0.
  • As shown in FIGS. 12A and 12B, the distance d1 from the shooting point P0 to the representative point of the reference object 21 can be calculated from the vertical angle of view θ and the height h of the shooting point P0 according to the following equation (1).
  • d1 = h / tan θ … (1)
  • Further, from the horizontal angle of view φ and the distance d1, the position of the reference object 21 in the horizontal direction can be calculated (a worked numerical example follows below).
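A worked numerical example of equation (1) and of the horizontal placement, with illustrative values that are not taken from the patent:

```python
import math

h = 1.6                     # height of shooting point P0, metres (assumed)
theta = math.radians(35.0)  # measured vertical angle of view (assumed)
d1 = h / math.tan(theta)    # equation (1): d1 = h / tan(theta)

phi = math.radians(120.0)   # measured horizontal angle (assumed)
x = d1 * math.cos(phi)      # horizontal position of the reference object
z = d1 * math.sin(phi)      # relative to the shooting point P0
print(f"d1 = {d1:.2f} m, horizontal position = ({x:.2f}, {z:.2f}) m")
```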
  • Subsequently, the 3D position estimation unit 152 derives the position of the 3D model in the 3D space. For example, as shown in FIG. 11, when the object is to be arranged alongside the reference object 21 (the sink), the 3D model is likewise placed in the 3D space at a position near the position of the reference object 21 estimated in step S235 (for example, a position shifted by the distance x4). Thereafter, the process returns to the main routine.
  • As described above, according to the second embodiment of the present invention, even when the coordinates of the shooting point P0 are unknown, the position of the 3D model in the 3D space can be estimated as long as the height h of the shooting point P0 is known.
  • When the position of the shooting point is known, the position of the 3D model in the 3D space can be estimated relatively easily.
  • The configuration of the image processing apparatus according to the third embodiment of the present invention is the same as that of the first embodiment as a whole (see FIG. 1), but differs from the first embodiment in the 3D position estimation process performed by the 3D position estimation unit 152.
  • FIG. 14 is a flowchart showing an image processing method according to the present embodiment.
  • FIG. 15 is a floor plan of real estate, which is an example of the subject of the background image.
  • FIG. 16 is a schematic view showing an example of the background image, and shows a state in which the background image L1 is cut along a horizontal plane passing through the center point P 0 ′.
  • In the present embodiment, the shooting point P0 in the real estate property is identified based on measured values of the distance (depth) and dimensions of subjects such as furniture and interior fixtures in the real estate property, and the object position is estimated from the identified shooting point.
  • In step S30, the image processing unit 150 acquires the background image with which a 3D model is to be combined.
  • Next, the image processing unit 150 measures, based on the background image L1, the distances from the shooting point P0 to subjects and the dimensions of those subjects at multiple locations, and stores the measurement data.
  • The method of measuring the distance and dimensions of a subject is the same as that described in the second embodiment (see FIGS. 12A to 13B).
  • The number of subjects whose distance and dimensions are measured is arbitrary; as an example, by measuring the distances to three subjects, the coordinates of the shooting point P0 in the horizontal plane can be identified (a trilateration sketch follows below).
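Recovering the shooting point from distances to subjects with known floor-plan coordinates is a standard trilateration problem. The patent does not specify a solver; a least-squares sketch under that reading:

```python
import numpy as np

def locate_shooting_point(anchors, distances):
    """Estimate the shooting point P0 in the horizontal plane from measured
    distances to three or more subjects with known floor-plan coordinates.
    anchors: (N, 2) array; distances: (N,) array; returns (x, y)."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    a0, d0 = anchors[0], d[0]
    # Subtracting the first circle equation |p - a0|^2 = d0^2 from the
    # others linearizes the system into A p = b.
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - d[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Example with three subjects at known positions (metres, illustrative).
print(locate_shooting_point([(0, 0), (4, 0), (0, 3)], [2.5, 2.9, 2.2]))
```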
  • Subsequently, the image processing unit 150 associates the measurement data of the subjects with the background image.
  • FIG. 16 shows a state in which the distances d31 to d38 measured in step S31 and the dimensions x31, x32, y31, and y32 in the horizontal plane are associated with the background image L1. The dimensions in the vertical plane are likewise associated with the background image.
  • In step S33, the image processing unit 150 acquires 3D data representing the 3D model to be combined with the background image L1. Further, in step S34, the image processing unit 150 acquires parameters relating to the 3D model to be combined with the background image L1.
  • Steps S33 and S34 are the same as steps S11 and S12 shown in FIG. 3, respectively.
  • Next, the image processing unit 150 determines the arrangement location of the object on the floor plan.
  • The process of determining the arrangement location of the object is the same as in the first embodiment (see step S131 in FIG. 5). In the following, it is assumed that the point P3 on the floor plan shown in FIG. 15 is determined as the arrangement location of the object.
  • In step S36, the image processing unit 150 acquires the position of the shooting point P0 at the time the background image L1 was captured. Specifically, the 3D position estimation unit 152 calculates the coordinates of the shooting point P0 on the floor plan from the distances measured in step S31 (see FIG. 15).
  • Subsequently, the image processing unit 150 creates, based on the measurement data associated with the background image L1, a 3D space that is surrounded by the background image L1 and in which the shooting point P0 on the floor plan is associated with the center point P0' of the celestial sphere.
  • Next, the image processing unit 150 arranges the 3D model in the 3D space based on the arrangement location of the object determined in step S35. Specifically, the 3D position estimation unit 152 calculates the coordinates of the arrangement location (point P3) of the object relative to the shooting point P0 on the floor plan, estimates the position in the 3D space corresponding to the arrangement location (point P3'), and places the 3D model at this position.
  • The subsequent steps S39 to S43 are the same as steps S15 to S19 shown in FIG. 3, respectively.
  • As described above, according to the present embodiment, the shooting point P0 on the floor plan can be obtained accurately. Therefore, the position in the 3D space corresponding to the arrangement location of the object determined on the floor plan can be estimated more accurately.
  • In the above third embodiment, the distance and dimensions of the subject are measured from the background image (see step S31), but the distance and dimensions may instead be measured at the time of photographing, with the measured data stored in association with the image data. In this case, the 3D space corresponding to the real estate property can be reproduced more accurately.
  • In the above first to third embodiments, the light source is set based on the background image by image-based lighting, but the method of setting the light source is not limited to this, and various known methods may be applied.
  • For example, a subject that can serve as a light source (a window, a lighting device, etc.) appearing in the background image may be used, and the shadow of the 3D model may be generated by performing a global illumination calculation (radiosity method, photon mapping method, etc.).
  • FIG. 17 is a network diagram showing a configuration example of a system to which the image processing apparatus according to the first to third embodiments of the present invention is applied.
  • The system 1 shown in FIG. 17 includes an image processing apparatus (server) 30, a real estate management terminal 31, an object management terminal 32, and an image display terminal 33, and these devices are connected to one another via a communication network N.
  • As the communication network N, for example, an Internet line, a telephone line, a LAN, a dedicated line, a mobile communication network, WiFi (Wireless Fidelity), Bluetooth (registered trademark), or a combination of these communication lines is used.
  • The communication network N may be wired, wireless, or a combination of these.
  • The image processing apparatus 30 is constituted by a host computer with high arithmetic processing capability, functions as a server that manages the system 1 in a centralized manner, and executes the image processing described in the first to third embodiments.
  • The computer constituting the image processing apparatus 30 does not necessarily have to be a single computer, and may consist of a plurality of computers distributed on the communication network N.
  • The real estate management terminal 31 manages information on real estate rental properties and real estate sale properties.
  • The real estate management terminal 31 stores image data relating to real estate properties (floor plan data and background image data) as well as information relating to transactions, such as the location of the property, the owner, and rental or sale conditions.
  • The floor plan data and the background image data are uploaded from the real estate management terminal 31 to the image processing apparatus 30 and used for the image processing.
  • The object management terminal 32 creates and stores information relating to objects such as furniture to be combined with background images, that is, the data and parameters of the 3D models representing the objects, and setting files. These pieces of information are uploaded from the object management terminal 32 to the image processing apparatus 30 and used for the image processing.
  • The image display terminal 33 is a terminal device for displaying images combined by the image processing apparatus 30 and allowing the user to view them.
  • The image display terminal 33 allows the user to recognize a three-dimensional virtual space (VR) by displaying two-dimensional still images or moving images.
  • As the image display terminal 33, for example, a goggle-type dedicated device, a so-called head-mounted display (HMD), can be used.
  • In the HMD, two lenses are respectively attached at positions corresponding to the left and right eyes of the user.
  • Two images provided with parallax are respectively displayed in two left and right areas provided on the screen.
  • The user can recognize an image three-dimensionally (stereoscopically) by viewing the two images with the left and right eyes, respectively, through the two lenses.
  • The image display terminal 33 is not limited to an HMD, and a tablet terminal or a stationary display may be used.
  • In this case, an image obtained by combining a three-dimensional model of an object such as furniture with a background image may be displayed as a panoramic image.
  • In the system 1, the image processing apparatus 30 generates a composite image in which a 3D model is combined with a background image, using the image data of the background image uploaded from the real estate management terminal 31, the data of the 3D model uploaded from the object management terminal 32, and the like, and transmits the image data of the composite image to the image display terminal 33. The composite image can thus be displayed on the image display terminal 33.
  • Various terminal devices may be further connected to the communication network N.
  • For example, a terminal device may be separately provided for allowing the user to select a background image or an object to be combined with the background image, or to designate the arrangement position of the object on the floor plan.
  • Further, a plurality of image display terminals 33 may be provided so that the same content can be viewed simultaneously on these image display terminals 33.
  • FIG. 18 is a schematic view showing another configuration example of a system to which the image processing apparatus according to the first to third embodiments of the present invention is applied.
  • In FIG. 18, bold solid arrows represent the flow of downloaded data, and dashed arrows represent the flow of uploaded data.
  • The system 2 shown in FIG. 18 includes an image processing apparatus (editor) 40, an image management server 41, an object management device 42, a converter 43, a service management device 44, and an image display terminal 45. These devices are connected to one another via a communication network.
  • The image processing apparatus (editor) 40 functions as an editor that processes or edits background images by executing the image processing described in the first to third embodiments.
  • The image management server 41 manages the images used in the system 2.
  • The object management device 42 creates, or stores from an external source, data representing 3D models of objects such as furniture or furnishings together with related information such as textures (hereinafter collectively referred to as 3D files), and setting files in which information such as furniture placement is described.
  • FBX, illustrated in FIG. 18, is an example of a file format for data representing a 3D model.
  • However, the file formats usable in this system are not limited to this.
  • The converter 43 converts data representing a 3D model into a file that can be read into the image processing apparatus 40 and that is associated with related information such as textures.
  • The service management device 44 includes a storage unit that stores floor plan data and background image data, and also includes a screen capable of displaying floor plans and background images. It is used to display a floor plan on the screen and allow the user to designate the arrangement position of an object such as furniture, or to display on the screen a composite image 44b in which a background image 44a and the 3D model of an object are combined and present it to the user.
  • The configuration of the image display terminal 45 is the same as that of the image display terminal 33 shown in FIG. 17, and it is used when the user views, as VR, a composite image in which a 3D model is combined with a background image.
  • In the system 2, the object management device 42 uploads the data of 3D models of objects such as furniture to the image management server 41.
  • The converter 43 downloads the 3D model data from the image management server 41, converts the 3D model, textures, and the like into a file, and uploads the file to the image management server 41.
  • The image processing apparatus 40 downloads the file converted in this way from the image management server 41, downloads the setting file from the object management device 42, and further downloads the background image data from the service management device 44. The image processing apparatus 40 then performs the processes for combining the 3D model with the background image (setting of lighting, adjustment of materials, etc.) and uploads the file of the created composite image data to the service management device 44.
  • The image display terminal 45 downloads the file of the composite image data from the service management device 44 and reproduces the composite image, that is, the home staging content, based on the file.
  • The present invention is not limited to the above first to third embodiments and modifications, and various inventions can be formed by appropriately combining the components disclosed in the first to third embodiments and modifications. For example, an invention may be formed by excluding some components from all the components shown in the first to third embodiments and the modifications, or by appropriately combining components shown in different embodiments.
  • 10 image processing apparatus; 11 communication interface; 12 display unit; 13 operation input unit; 14 storage unit; 15 processor; 20 omnidirectional camera; 21 reference object; 30 image processing apparatus (server); 31 real estate management terminal; 32 object management terminal; 33 image display terminal; 40 image processing apparatus (editor); 41 image management server; 42 object management device; 43 converter; 44 service management device; 45 image display terminal; 141 program storage unit; 142 background image data storage unit; 143 floor plan data storage unit; 144 3D data storage unit; 145 composite image data storage unit; 150 image processing unit; 151 arrangement setting unit; 152 3D position estimation unit; 153 object layer generation unit; 154 shadow generation unit; 155 shadow layer generation unit; 156 image combining unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Provided is an image processing device capable of freely combining, without discomfort, a 3D model of an object such as furniture with a two-dimensional background image. The image processing device is provided with: an arrangement setting unit which sets an arrangement site of an object on a floor plan of a real estate item; a 3D location estimation unit which estimates the location of the object, corresponding to the arrangement site of the object on the floor plan, in a 3D space surrounded by a celestial sphere image acquired as a background image; a shadow generation unit which generates a shadow produced by arranging a 3D model at the object location; an object layer generation unit which generates a first layer by rendering an image obtained by projecting the 3D model onto the surface of the celestial sphere, which is the same as that of the background image, with the center of the 3D space employed as the projection center; a shadow layer generation unit which generates a second layer by rendering an image obtained by projecting the shadow onto the surface of the celestial sphere with the center of the 3D space employed as the projection center; and an image synthesizing unit which synthesizes the first and second layers with the background image.

Description

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM
The present invention relates to an image processing apparatus, an image processing method, and an image processing program for combining an image of an object with a background image obtained by photographing a subject.
BACKGROUND ART: In recent years, a service called home staging, performed at the time of sales transactions, has become known in the real estate industry. Home staging is a service that provides interior coordination for real estate properties, such as single-family homes and condominiums, in order to promote their sale. Specifically, furniture, furnishings, accessories, and the like, selected in consideration of the target demographic of purchasers, current trends, the characteristics of the property, and so on, are placed in the property. Prospective purchasers (including those still considering a purchase) are said to find it easier to imagine living in a property by previewing one to which home staging has been applied, so that the sale tends to proceed smoothly.
On the other hand, services that allow users to preview real estate properties using virtual reality (hereinafter also referred to as VR) technology have recently appeared. VR images are usually displayed on a glasses- or goggle-type display device called a head-mounted display (hereinafter also referred to as an "HMD"). The HMD incorporates a screen on which two images provided with parallax are displayed, and two lenses installed in line with the user's two eyes; by viewing the screen through these lenses, the user can perceive the image three-dimensionally. In addition, a gyro sensor and an acceleration sensor are built into the HMD, and the image displayed on the screen changes in accordance with the movement of the user's head detected by these sensors. As a result, the user can experience the sensation of being inside a three-dimensional image.
As a technology relating to interior simulation of real estate properties, Patent Document 1 discloses a technique that generates a 3D model of a real estate property from property information including a floor plan image, indoor photographs, and the like, and performs indoor browsing in virtual reality, interior simulation, and the like based on the generated 3D model (see paragraph 0015 of Patent Document 1). Specifically, the scale of a 3D furniture model prepared in advance is adjusted and the model is placed in the 3D model of the property (see paragraph 0070 of Patent Document 1).
Further, with regard to a technique for generating composite image data of a photographed background image and a three-dimensional CG object, Patent Document 2 discloses an image processing apparatus comprising: surrounding-environment three-dimensional shape data generation means for generating three-dimensional shape data of the surrounding environment from environment map data consisting of two or more viewpoints; and virtual subject combination means for combining a virtual subject with background image data using the surrounding-environment three-dimensional shape data as a light source.
Further, Patent Document 3 discloses a video processing apparatus comprising: storage means for storing a foreground image, a background image serving as the background of the foreground image, and depth information indicating the depth of the background image; combining-position acquisition means for acquiring the combining position of the foreground image on the background image; projection means for projecting the coordinates of a space generated from the depth information of the background image onto the coordinates of the background image; spatial-coordinate acquisition means for acquiring the spatial coordinates of the combining position from the projection information of the projection means and the combining position; calculation means for calculating, from light source parameters holding information on the position and illuminance of a light source as viewed from the position where the background image was taken and from the spatial coordinates of the combining position, the distance between the light source and the spatial coordinates of the combining position, or the illuminance of the light source as viewed from the spatial coordinates of the combining position; light source parameter changing means for changing the light source parameters in accordance with the spatial coordinates of the combining position and the calculated value; lighting means for applying a lighting effect to the foreground image using the light source parameters; and combining means for combining the foreground image, to which the lighting effect has been applied by the lighting means, at the combining position of the background image.
In addition, as a service for real estate agents, Non-Patent Document 1 discloses a service in which a 360° image (celestial sphere image) is obtained by photographing a real estate property using a digital camera (product name: THETA) manufactured by Ricoh Co., Ltd., processed into VR content, and provided to the real estate agent.
Japanese Patent No. 6116746; JP 2013-152683 A; JP 2013-149219 A
When the method of arranging three-dimensional models of objects such as furniture in a three-dimensional background model of a real estate property is used to provide home staging content usable in VR, constructing (rendering) the 3D data requires a large computational cost and takes time, and displaying the data also requires a large computational cost. High-spec equipment is therefore required, making it difficult to use home staging content casually.
As a simpler method of creating home staging content usable in VR, combining a three-dimensional model (furniture model) with a two-dimensional background image of a real subject is also conceivable. However, since a two-dimensional background image and a three-dimensional model occupy different positions in the computer graphics space, the furniture may appear to float against the background image or its depth may appear wrong; such content easily feels unnatural to the user, and it is difficult to freely arrange a three-dimensional model on a two-dimensional background image.
 本発明は上記に鑑みてなされたものであって、2次元の背景画像に対し、家具等のオブジェクトの3次元モデルを自在に、且つ違和感なく合成することができる画像処理装置、画像処理方法、及び画像処理プログラムを提供することを目的とする。 The present invention has been made in view of the above, and is an image processing apparatus and an image processing method capable of freely synthesizing a three-dimensional model of an object such as furniture with a two-dimensional background image freely. And providing an image processing program.
 To solve the above problems, an image processing apparatus according to one aspect of the present invention is an image processing apparatus that combines, with a background image showing a real estate property, an image of an object virtually arranged in the real estate property, the apparatus including: an arrangement setting unit that sets an arrangement location of the object on a floor plan of the real estate property; a 3D position estimation unit that estimates an object position, which is a position corresponding to the arrangement location of the object on the floor plan, in a three-dimensional graphics space surrounded by a celestial sphere image acquired as the background image by photographing the real estate property; a shadow generation unit that generates a shadow caused by placing a three-dimensional model of the object at the object position; an object layer generation unit that generates a first layer by rendering an image obtained by projecting the three-dimensional model placed at the object position onto the same celestial sphere as the background image, with the center of the three-dimensional graphics space as the projection center; a shadow layer generation unit that generates a second layer by rendering an image obtained by projecting the shadow onto the celestial sphere, with the center of the three-dimensional graphics space as the projection center; and an image combining unit that combines the second layer and the first layer with the background image.
 The image processing apparatus may further include a display unit that displays an image of the floor plan and an operation input unit that receives operations performed by a user, and the arrangement setting unit may set the arrangement location based on an operation, received by the operation input unit, on the image of the floor plan displayed on the display unit.
 In the image processing apparatus, the shadow generation unit may set a light source based on the background image and generate the shadow of the three-dimensional model based on the light source.
 In the image processing apparatus, the 3D position estimation unit may estimate the object position based on the dimensions of each part in the floor plan.
 In the image processing apparatus, the 3D position estimation unit may estimate the object position based on the distance and dimensions of a subject located in the vicinity of the arrangement location.
 In the image processing apparatus, the 3D position estimation unit may acquire a shooting point based on the distance and dimensions of a subject measured from the background image, and may estimate the object position based on the shooting point.
 In the image processing apparatus, the 3D position estimation unit may acquire a shooting point based on the actually measured distance and dimensions of a subject, and may estimate the object position based on the shooting point.
 An image processing method according to another aspect of the present invention is an image processing method for combining, with a background image showing a real estate property, an image of an object virtually arranged in the real estate property, the method including the steps of: setting an arrangement location of the object on a floor plan of the real estate property; estimating an object position, which is a position corresponding to the arrangement location of the object on the floor plan, in a three-dimensional graphics space surrounded by a celestial sphere image acquired as the background image by photographing the real estate property; generating a shadow caused by placing a three-dimensional model of the object at the object position; generating a first layer by rendering an image obtained by projecting the three-dimensional model placed at the object position onto the same celestial sphere as the background image, with the center of the three-dimensional graphics space as the projection center; generating a second layer by rendering an image obtained by projecting the shadow onto the celestial sphere, with the center of the three-dimensional graphics space as the projection center; and combining the second layer and the first layer with the background image.
 An image processing program according to still another aspect of the present invention is an image processing program for combining, with a background image showing a real estate property, an image of an object virtually arranged in the real estate property, the program causing a computer to execute the steps of: setting an arrangement location of the object on a floor plan of the real estate property; estimating an object position, which is a position corresponding to the arrangement location of the object on the floor plan, in a three-dimensional graphics space surrounded by a celestial sphere image acquired as the background image by photographing the real estate property; generating a shadow caused by placing a three-dimensional model of the object at the object position; generating a first layer by rendering an image obtained by projecting the three-dimensional model placed at the object position onto the same celestial sphere as the background image, with the center of the three-dimensional graphics space as the projection center; generating a second layer by rendering an image obtained by projecting the shadow onto the celestial sphere, with the center of the three-dimensional graphics space as the projection center; and combining the second layer and the first layer with the background image.
 According to the present invention, an object position in the three-dimensional graphics space corresponding to the arrangement location of the object on the floor plan is estimated, layers are generated by respectively rendering an image of the three-dimensional model placed at the object position and an image of the shadow of the three-dimensional model, and these layers are combined with the background image; therefore, a three-dimensional model of an object such as furniture can be combined with a two-dimensional background image freely and without a sense of incongruity.
FIG. 1 is a block diagram showing a schematic configuration of an image processing apparatus according to a first embodiment of the present invention.
FIGS. 2A and 2B are schematic diagrams for explaining image combining processing executed by the image processing apparatus shown in FIG. 1.
FIG. 3 is a flowchart showing an image processing method according to the first embodiment of the present invention.
FIG. 4 is a floor plan of a real estate property, which is an example of the subject of a background image.
FIG. 5 is a flowchart showing the position estimation processing shown in FIG. 3.
FIGS. 6 to 8 are schematic diagrams for explaining position estimation processing for a 3D model.
FIG. 9 is a schematic diagram for explaining the image processing shown in FIG. 3.
FIG. 10 is a flowchart showing position estimation processing in a second embodiment of the present invention.
FIG. 11 is a floor plan of a real estate property, which is an example of the subject of a background image.
FIGS. 12A to 13B are schematic diagrams for explaining the position estimation processing shown in FIG. 10.
FIG. 14 is a flowchart showing an image processing method according to a third embodiment of the present invention.
FIG. 15 is a floor plan of a real estate property, which is an example of the subject of a background image.
FIG. 16 is a schematic diagram for explaining the image processing method in the third embodiment of the present invention.
FIG. 17 is a network diagram showing a configuration example of a system to which the image processing apparatuses according to the first to third embodiments of the present invention are applied.
FIG. 18 is a schematic diagram showing another configuration example of a system to which the image processing apparatuses according to the first to third embodiments of the present invention are applied.
 Hereinafter, an image processing apparatus, an image processing method, and an image processing program according to embodiments of the present invention will be described with reference to the drawings. Note that the present invention is not limited by these embodiments. In the description of the drawings, the same parts are denoted by the same reference signs.
 The image processing apparatus according to the embodiments of the present invention uses a celestial sphere image showing a real estate property as a background image and executes image processing that combines, with this background image, an image of a three-dimensional model (3D model) of an object such as furniture or furnishings, thereby generating an image that looks as if the object had been placed in the real estate property when the photograph was taken.
 The celestial sphere image used as the background image is also called a 360° image, a full spherical image, or an omnidirectional image, and is an image capturing subjects over substantially 360° around the shooting location. The background image (celestial sphere image) may be a still image or a moving image.
 In the present image processing apparatus, a virtual transparent floor is arranged on the background image, and the shadow of the 3D model is cast onto this floor, which enhances the sense that the 3D model is really there. In addition, the sense of incongruity is reduced by lighting the 3D model using the background image as the light source.
 Here, if the entire image including the background of the real estate property were constructed with three-dimensional CG, rendering would be required for each drawing operation, and executing this processing would require a large computational cost, so high-spec equipment would be necessary. In the image processing apparatus according to the present embodiment, however, the 3D model of the virtually arranged object is combined with a two-dimensional background image, so the computational cost is modest.
 Furthermore, in the present image processing apparatus, the data of the 3D model of the object is rendered in accordance with the projection of the two-dimensional background image, so there is no depth mismatch and the object can be displayed without a sense of incongruity.
 By presenting content in which a 3D model is combined with a background image in this way, a customer considering purchasing or renting a real estate property can try out the coordination of furniture and the like in the property on the screen until satisfied. It is therefore possible to promote purchase and rental transactions of real estate properties without incurring significant cost.
 Hereinafter, the image processing apparatus according to the embodiments of the present invention will be described in detail.
(First Embodiment)
 FIG. 1 is a block diagram showing a schematic configuration of an image processing apparatus according to the first embodiment of the present invention. FIGS. 2A and 2B are schematic diagrams for explaining the image combining processing executed by the image processing apparatus shown in FIG. 1. FIG. 2A shows a floor plan of a real estate property. FIG. 2B shows part of an image in which 3D models of a sofa, a low table, a chest, and a picture are combined with a background image (celestial sphere image) acquired by shooting in the living room of the real estate property (see FIG. 2A). As shown in FIG. 2B, the objects (the sofa, the low table, the chest, and the picture) are displayed against the background image without any sense of incongruity, each with a sense of depth corresponding to its arrangement, as if the objects had actually been placed in the living room when the photograph was taken.
 As shown in FIG. 1, the image processing apparatus 10 includes a communication interface 11, a display unit 12, an operation input unit 13, a storage unit 14, and a processor 15. A general-purpose computer can be used as the image processing apparatus 10.
 The communication interface 11 connects the image processing apparatus 10 to a communication network and transmits and receives information to and from other devices connected to the communication network. The communication interface 11 is configured using, for example, a soft modem, a cable modem, a wireless modem, an ADSL modem, or an ONU (Optical Network Unit). The communication interface 11 also functions as a data acquisition unit that loads image data generated by an omnidirectional camera, 3D model data, and the like into the image processing apparatus 10.
 The display unit 12 is a display including a display panel formed of liquid crystal or organic EL (electroluminescence) and a drive unit.
 The operation input unit 13 is an input device such as operation buttons, a keyboard, a pointing device such as a mouse, or a touch sensor provided on the display unit 12; it receives operations performed by the user and inputs signals corresponding to those operations to the processor 15.
 The storage unit 14 is a computer-readable storage medium such as a semiconductor memory, for example a ROM or a RAM. The storage unit 14 includes a program storage unit 141, a background image data storage unit 142, a floor plan data storage unit 143, a 3D data storage unit 144, and a composite image data storage unit 145, and stores an operating system program, driver programs, application programs for executing various functions, and various parameters, image data, and the like used during execution of these programs.
 Specifically, the program storage unit 141 stores various programs such as the image processing program.
 The background image data storage unit 142 stores image data of celestial sphere images acquired as background images by photographing real estate properties (hereinafter also referred to as background image data).
 The floor plan data storage unit 143 stores image data of floor plans of real estate properties (hereinafter also referred to as floor plan data).
 The 3D data storage unit 144 stores data representing 3D models of objects such as furniture to be virtually arranged in a real estate property (hereinafter also referred to as 3D data) and related parameters (for example, parameters representing textures). The 3D data may be generated in the image processing apparatus 10, or may be generated in another device and acquired via the communication network N.
 The composite image data storage unit 145 stores image data of composite images in which an image of a 3D model is combined with a background image (hereinafter also referred to as composite image data).
 The processor 15 is configured using, for example, a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit); by reading the various programs stored in the program storage unit 141, it comprehensively controls the units of the image processing apparatus 10 and executes various kinds of processing for combining an image of an object such as furniture with a background image showing a real estate property. Specifically, the functions of the image processing unit 150, realized by the processor 15 executing the image processing program, include an arrangement setting unit 151, a 3D position estimation unit 152, an object layer generation unit 153, a shadow generation unit 154, a shadow layer generation unit 155, and an image combining unit 156.
 The arrangement setting unit 151 sets, in accordance with signals input from the operation input unit 13, the arrangement location on the floor plan of an object to be virtually arranged in the real estate property.
 The 3D position estimation unit 152 estimates, in a three-dimensional graphics space (hereinafter also referred to as 3D space) that is surrounded by the celestial-sphere-shaped background image and in which the shooting point is associated with the center point of the celestial sphere, the position corresponding to the arrangement location of the object on the floor plan (hereinafter also referred to as the object position).
 When the 3D model of the object is placed at the object position in the 3D space, the object layer generation unit 153 renders the image of the 3D model as seen from the center of the 3D space onto a layer provided on the same celestial sphere as the background image. That is, it generates an image in which the 3D model is projected onto the celestial sphere with the center of the 3D space as the projection center. Hereinafter, the layer onto which the image of the 3D model is rendered is referred to as the object layer (first layer).
 The shadow generation unit 154 generates the shadow produced by the 3D model placed at the object position in the 3D space. Specifically, the shadow generation unit 154 executes image-based lighting processing that sets a light source based on the background image, and generates the shadow of the 3D model based on the set light source. Image-based lighting is a technique for rendering a scene using an image of a real subject as the color information for lighting.
 The shadow layer generation unit 155 renders the image of the shadow generated by the shadow generation unit 154, as seen from the center of the 3D space, onto a layer provided on the same celestial sphere as the background image. That is, it generates an image in which the shadow of the 3D model is projected onto the celestial sphere with the center of the 3D space as the projection center. Hereinafter, the layer onto which the shadow image is rendered is referred to as the shadow layer (second layer).
 The image combining unit 156 superimposes the shadow layer and the object layer on the background image in this order, thereby generating a composite image in which the image of the 3D model of the object is combined with the background image.
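 For illustration, the superimposition performed by the image combining unit 156 can be sketched in code. The following is a minimal sketch in Python, assuming the background image and the two layers are equirectangular images of identical size held as floating-point NumPy arrays, with the layers carrying a straight (non-premultiplied) alpha channel in [0, 1]; the function names and data layout are illustrative assumptions, not part of the embodiment.

```python
import numpy as np

def composite_over(base_rgb: np.ndarray, layer_rgba: np.ndarray) -> np.ndarray:
    """Alpha-blend a partially transparent layer over an opaque base image."""
    alpha = layer_rgba[..., 3:4]  # per-pixel opacity of the layer, in [0, 1]
    return layer_rgba[..., :3] * alpha + base_rgb * (1.0 - alpha)

def compose_scene(background: np.ndarray,
                  shadow_layer: np.ndarray,
                  object_layer: np.ndarray) -> np.ndarray:
    """Superimpose the shadow layer, then the object layer, on the background."""
    out = composite_over(background, shadow_layer)  # shadows first
    return composite_over(out, object_layer)        # 3D model image on top
```

 Because both layers are fully transparent outside their rendered regions, the background image shows through unchanged wherever no shadow or object was drawn.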
 Next, the image processing method according to the present embodiment will be described.
 The outline of the image processing in the present embodiment is as follows. Three celestial sphere images in total are prepared: a background image (celestial sphere image) to be combined, a transparent celestial sphere image layer for shadows (shadow layer), and a transparent celestial sphere image layer for the 3D models of objects such as furniture (object layer). The background image is acquired by shooting at the site of the real estate property using an omnidirectional camera or the like. Note that, in addition to these two layers, further layers to be superimposed on the background image may be added as appropriate.
 Next, the 3D model of the object is placed in the three-dimensional graphics space (3D space) surrounded by these celestial sphere images. Then, to cast a shadow from the 3D model, a light source based on the background image is set by image-based lighting.
 Next, a camera for rendering the celestial sphere images is placed at the origin of the 3D space (the center of the celestial sphere). In other words, a viewpoint (projection center) from which the 3D model and its shadow are viewed from the center of the 3D space is set.
 Next, with the 3D model placed in the 3D space, the image of the shadow produced by the light source set based on the background image is rendered and pasted onto the shadow layer. Also, the image of the 3D model placed in the 3D space is rendered and pasted onto the object layer. By superimposing the background image, the shadow layer, and the object layer, a composite image in which the object is combined seamlessly with the background image can be obtained.
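 As a concrete illustration of projecting onto the same celestial sphere as the background image, the following minimal Python sketch converts a 3D point, expressed relative to the center of the 3D space (the projection center), into pixel coordinates of an equirectangular celestial sphere image. The axis convention (y up, longitude 0 along +z) and the function name are assumptions made for illustration.

```python
import math

def project_to_equirect(x: float, y: float, z: float,
                        width: int, height: int) -> tuple[int, int]:
    """Map a 3D point, as seen from the sphere center, to equirectangular pixels."""
    lon = math.atan2(x, z)                     # longitude, -pi .. pi
    lat = math.atan2(y, math.hypot(x, z))      # latitude, -pi/2 .. pi/2
    u = (lon / (2.0 * math.pi) + 0.5) * width  # column
    v = (0.5 - lat / math.pi) * height         # row (row 0 = straight up)
    return int(u) % width, min(int(v), height - 1)
```

 Rendering a layer then amounts to drawing each visible sample of the 3D model (or of its shadow) at the pixel returned by such a mapping.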
 Hereinafter, such image processing will be described in detail with specific examples. FIG. 3 is a flowchart showing the image processing method according to the present embodiment. FIG. 4 is a floor plan of a real estate property, which is an example of the subject of a background image. In the present embodiment, a description will be given assuming that an omnidirectional camera is installed at a position corresponding to the shooting point P0 on the floor plan shown in FIG. 4 and the interior of the real estate property is photographed to generate a still or moving celestial sphere image, and that a 3D model is combined using this celestial sphere image as the background image.
 First, in step S10, the image processing unit 150 acquires a background image with which a 3D model is to be combined. In the present embodiment, the image processing unit 150 acquires the background image by reading the background image data stored in the background image data storage unit 142.
 In the subsequent step S11, the image processing unit 150 acquires data representing the 3D model to be combined with the background image (3D data). Specifically, the image processing unit 150 causes the display unit 12 to display names, icons, or the like representing the 3D models whose 3D data is stored in the 3D data storage unit 144, and lets the user select a desired 3D model. The image processing unit 150 then reads the 3D data representing the selected 3D model from the 3D data storage unit 144 in accordance with the signal input from the operation input unit 13 in response to the user's operation.
 In the subsequent step S12, the image processing unit 150 acquires parameters relating to the 3D model to be combined with the background image. Specifically, it reads from the 3D data storage unit 144 parameters for expressing the coordinates of the 3D model as functions such as curves and curved surfaces, parameters used in processing such as texture mapping, and the like.
 In the subsequent step S13, the image processing unit 150 estimates the position of the 3D model in the 3D space corresponding to the arrangement location of the object on the floor plan shown in FIG. 4.
 FIG. 5 is a flowchart showing the position estimation processing in step S13. FIGS. 6 to 8 are schematic diagrams for explaining the position estimation processing for the 3D model. FIG. 6 shows a state in which the omnidirectional camera 20 attached to a tripod is installed on the floor. FIG. 7 shows the background image L1, obtained by photographing the real estate property with the omnidirectional camera 20, cut along a horizontal plane passing through the center point P0' of the celestial sphere. The background image L1 shown in FIG. 7 contains images of a sink, a stove, walls, the door of a Western-style room, the sliding door (fusuma) of a Japanese-style room, the bathroom door, the toilet door, the entrance door, and so on, all of which fall within the field of view of the omnidirectional camera 20 installed at the shooting point P0 (see FIG. 4). FIG. 8 shows the same background image L1 cut along a vertical plane passing through the center point P0'. The background image L1 shown in FIG. 8 contains images of the floor, walls, ceiling, entrance door, and so on within the field of view of the omnidirectional camera 20.
 In the present embodiment, the dimensions of each part of the real estate property (the rooms, fixtures, and the like shown on the floor plan) and the coordinates of the shooting point P0 (its plane coordinates and height) are known, and the position of the 3D model in the 3D space is estimated based on these dimensions and coordinates. As shown in FIG. 6, the height of the shooting point P0 can be acquired as the height h of the omnidirectional camera 20 including the tripod.
 First, in step S131, the arrangement setting unit 151 determines the arrangement of the object on the floor plan. Specifically, the arrangement setting unit 151 causes the display unit 12 to display the floor plan of the real estate property and lets the user specify, on the floor plan, the place where an object such as furniture is to be arranged. The arrangement setting unit 151 then acquires the position information (coordinates) of the object on the floor plan in accordance with the signal input from the operation input unit 13 in response to the user's operation. In the following, it is assumed that the point P1 on the floor plan shown in FIG. 4 has been determined as the arrangement location of the object desired by the user.
 In the subsequent step S132, the 3D position estimation unit 152 acquires the position of the shooting point P0 at which the background image L1 was captured.
 In the subsequent step S133, the 3D position estimation unit 152 derives the position of the 3D model in the 3D space surrounded by the background image L1, based on the floor plan. Specifically, the 3D position estimation unit 152 creates a 3D space in which the shooting point P0 is associated with the center point P0' of the celestial sphere, and maps the positions and dimensions of each part on the floor plan to coordinates in the 3D space. The coordinates of the point P1' in the 3D space corresponding to the point P1 on the floor plan can thereby be acquired.
 When the position of the 3D model has been estimated in this way, the processing returns to the main routine.
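 Step S133 amounts to a change of origin: floor-plan coordinates (in meters) are re-expressed relative to the shooting point P0, which coincides with the center point P0' of the 3D space, with the floor lying at height -h below that center. A minimal sketch under these assumptions, using the same y-up convention as the projection sketch above (names are illustrative):

```python
def floorplan_to_3d(px: float, py: float,
                    cam_x: float, cam_y: float,
                    cam_h: float) -> tuple[float, float, float]:
    """Convert a floor-plan point (px, py), in meters, to 3D-space coordinates.

    The 3D-space origin is the camera (center of the celestial sphere);
    the floor is the plane y = -cam_h, and floor-plan y maps to 3D z (depth).
    """
    return (px - cam_x, -cam_h, py - cam_y)
```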
 Referring again to FIG. 3, in step S14 following step S13, the image processing unit 150 places the 3D model at the position in the 3D space estimated in step S13. FIG. 9 is a schematic diagram for explaining the image processing shown in FIG. 3, and shows a state in which a 3D model a11 of a sofa is placed in the 3D space surrounded by the background image L1.
 In the subsequent step S15, the shadow generation unit 154 sets the light source and the material. Specifically, the shadow generation unit 154 generates a light probe image for image-based lighting based on the background image L1, and sets the material of the 3D model (surface reflection characteristics and the like). Here, a light probe image is a high dynamic range image that records the incident illumination conditions in all directions.
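 Full image-based lighting shades the scene with the entire light probe; as a simplification for illustration, the sketch below only estimates a single dominant light direction from the equirectangular background image, which could then drive a directional shadow. The luminance weights and axis convention are assumptions, not taken from the embodiment.

```python
import numpy as np

def dominant_light_direction(equirect_rgb: np.ndarray) -> np.ndarray:
    """Estimate one dominant light direction from an equirectangular image.

    Averages the luminance-weighted direction vectors of all pixels;
    a full IBL pipeline would use the whole HDR probe for shading.
    """
    h, w, _ = equirect_rgb.shape
    lum = equirect_rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luminance
    lon = (np.arange(w) / w - 0.5) * 2.0 * np.pi
    lat = (0.5 - (np.arange(h) + 0.5) / h) * np.pi
    lon, lat = np.meshgrid(lon, lat)                  # both become (h, w)
    dirs = np.stack([np.cos(lat) * np.sin(lon),       # x
                     np.sin(lat),                     # y (up)
                     np.cos(lat) * np.cos(lon)], -1)  # z
    weight = (lum * np.cos(lat))[..., None]           # cos(lat): solid-angle factor
    v = (dirs * weight).sum(axis=(0, 1))
    return v / np.linalg.norm(v)
```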
 In the subsequent step S16, the shadow generation unit 154 generates the shadow of the 3D model placed in the 3D space, based on the light source and material set in step S15. That is, rendering is performed using the generated light probe image. As a result, a shadow with the 3D model as the occluder is generated in the 3D space, and the surface of the 3D model is also lit based on the background image L1. FIG. 9 shows the shadow a13 of the 3D model a11 produced by the light source a12.
 In the subsequent step S17, the shadow layer generation unit 155 generates the shadow layer onto which the image of the shadow of the 3D model is rendered. Specifically, the shadow layer generation unit 155 projects the shadow of the 3D model generated in step S16 onto the shadow layer L2 located on the same celestial sphere as the background image, with the center point P0' of the 3D space as the projection center. FIG. 9 shows the region a14 of the image obtained by projecting the shadow a13 onto the shadow layer L2 above the background image L1. In the shadow layer L2, the region other than the image region a14 is transparent.
 In the subsequent step S18, the object layer generation unit 153 generates the object layer onto which the image of the 3D model is rendered. Specifically, the object layer generation unit 153 projects the 3D model, which was placed in the 3D space in step S14 and lit in step S16, onto the object layer L3 located on the same celestial sphere as the background image, with the center point P0' of the 3D space as the projection center. FIG. 9 shows the region a15 of the image obtained by projecting the 3D model a11 onto the object layer L3 above the shadow layer L2. In the object layer L3, the region other than the image region a15 is transparent.
 In the subsequent step S19, the image combining unit 156 generates a composite image in which the background image L1, the shadow layer L2, and the object layer L3 are superimposed in this order. An image in which the image of the 3D model is combined at the user's desired position in the background image is thereby obtained.
 The composite image generated in this way may be displayed as a panoramic image in which the celestial sphere is unrolled, or may be displayed as VR content.
 As described above, according to the first embodiment of the present invention, the position corresponding to the arrangement location of the user-desired object specified on the floor plan is estimated in the 3D space surrounded by the celestial sphere image acquired as the background image, the 3D model of the object is placed at this position, and the image of the 3D model is rendered onto the celestial sphere; the object can therefore be arranged freely with respect to the background image without causing any mismatch in position or size.
 Further, according to the first embodiment of the present invention, the shadow of the 3D model is generated by the light source set based on the background image and the image of this shadow is rendered onto the celestial sphere, so a composite image without a sense of incongruity can be generated.
 Furthermore, according to the first embodiment of the present invention, the 3D model is lit by the light source set based on the background image, so a more realistic composite image can be generated.
(Second Embodiment)
 Next, a second embodiment of the present invention will be described.
 The configuration of the image processing apparatus according to the second embodiment of the present invention is the same overall as that of the first embodiment (see FIG. 1); the position estimation processing for the 3D model executed by the 3D position estimation unit 152 (see step S13 in FIG. 3) differs from that of the first embodiment.
 FIG. 10 is a flowchart showing the position estimation processing in the second embodiment of the present invention. FIG. 11 is a floor plan of a real estate property, which is an example of the subject of a background image. FIGS. 12A to 13B are schematic diagrams for explaining the position estimation processing in the present embodiment. FIG. 12A is a side view showing the omnidirectional camera 20 photographing the real estate property, and FIG. 12B shows the background image L1 obtained by this shooting, cut along a vertical plane passing through the center point P0'. FIG. 13A is a top view showing the omnidirectional camera 20 performing the shooting, and FIG. 13B shows the background image L1 cut along a horizontal plane passing through the center point P0'.
 In the present embodiment, position estimation processing in the case where the dimensions of each part of the real estate property and the plane coordinates of the shooting point P0 are unknown will be described. In this case, as described below, a subject such as a piece of furniture or a fixture located near the arrangement location of the user-desired object is referred to, and the object position is estimated based on the dimensions of this subject. Hereinafter, the subject used for reference is referred to as a reference object.
 First, in step S231, the arrangement setting unit 151 determines the arrangement of the object on the floor plan. The processing for determining the arrangement of the object is the same as in the first embodiment (see step S131 in FIG. 5). In the following, it is assumed that the point P2 on the floor plan shown in FIG. 11 has been determined as the arrangement location of the object.
 In the subsequent step S232, the 3D position estimation unit 152 selects, as the reference object, a subject near the arrangement location of the object on the floor plan. In the following, it is assumed that the sink shown in FIG. 11 has been selected as the reference object 21.
 In the subsequent step S233, the 3D position estimation unit 152 acquires the vertical angle of view and the horizontal angle of view of the visual field region that just contains the reference object 21 captured in the background image L1. Specifically, as shown in FIGS. 12B and 13B, the region a21 in which the reference object 21 appears is extracted from the background image L1, and the vertical angle of view α and the horizontal angle of view β are measured. For the vertical angle of view α, the region corresponding to the height h from the floor to the omnidirectional camera 20 is measured.
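 In an equirectangular celestial sphere image, angles of view can be read directly from pixel extents, since the full image width spans 360° and the full height spans 180°. The following minimal sketch measures β from the horizontal pixel extent of the region a21 and α from the horizon row down to the floor contact of the reference object; the bounding-box coordinates are assumed inputs, and this reading of how α is measured is an interpretation of the embodiment.

```python
import math

def angles_from_region(px_left: int, px_right: int, px_bottom: int,
                       width: int, height: int) -> tuple[float, float]:
    """Angles of view (radians) from pixel positions in an equirectangular image.

    Horizontally, `width` pixels span 2*pi; vertically, `height` pixels span pi.
    alpha is measured from the horizon row (height / 2) down to px_bottom,
    the row where the reference object meets the floor.
    """
    beta = (px_right - px_left) / width * 2.0 * math.pi
    alpha = (px_bottom - height / 2) / height * math.pi
    return alpha, beta
```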
 In the subsequent step S234, the 3D position estimation unit 152 generates a 3D space that is surrounded by the background image L1 and in which the shooting point P0 on the floor plan is associated with the center point P0' of the celestial sphere.
 In the subsequent step S235, the 3D position estimation unit 152 estimates the position of the reference object 21 in the 3D space. The position of the reference object 21 can be represented by a representative point, for example the center point of the reference object 21 or the point at the shortest distance from the shooting point P0. As a specific example, as shown in FIGS. 12A and 12B, the distance d1 from the shooting point P0 to the representative point of the reference object 21 can be calculated from the vertical angle of view α and the height h of the shooting point P0 by the following equation (1):
    d1 = h / tan α   … (1)
 Further, as shown in FIGS. 13A and 13B, the position of the reference object 21 in the horizontal direction can be calculated from the horizontal angle of view β and the distance d1. For example, the distance x4 from a certain reference position (for example, the line directly in front of the omnidirectional camera 20) to the representative point of the reference object 21 can be obtained by the following equation (2):
    x4 = d1 · sin(β / 2)   … (2)
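 Equations (1) and (2) translate directly into code; the following minimal sketch merely restates them (the function name is illustrative):

```python
import math

def reference_object_position(alpha: float, beta: float, h: float) -> tuple[float, float]:
    """Apply equations (1) and (2).

    alpha: vertical angle of view (radians) for the region of height h
    beta:  horizontal angle of view (radians) filled by the reference object
    h:     camera height above the floor
    Returns (d1, x4): the distance from the shooting point to the reference
    object's representative point, and its horizontal offset from the
    reference line in front of the camera.
    """
    d1 = h / math.tan(alpha)        # equation (1)
    x4 = d1 * math.sin(beta / 2.0)  # equation (2)
    return d1, x4
```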
 In the subsequent step S236, the 3D position estimation unit 152 derives the position of the 3D model in the 3D space. For example, as shown in FIG. 11, when the object is to be placed alongside the reference object 21 (the sink), the 3D model may be placed, in the 3D space as well, near the position of the reference object 21 estimated in step S235 (for example, at a position offset by the distance x4).
 Thereafter, the processing returns to the main routine.
 As described above, according to the second embodiment of the present invention, even when the coordinates of the shooting point P0 are unknown, the position of the 3D model in the 3D space can be estimated as long as the height h of the shooting point P0 is known.
 Here, when a 3D model is superimposed in real time on an image being captured, as in AR, the position of the shooting point is known, so the position of the 3D model in the 3D space can be estimated relatively easily. According to the present embodiment, however, the position of the 3D model in the 3D space can be set appropriately even when using a background image for which the position of the shooting point is unknown.
 In the above embodiment, furniture or fixtures provided in the real estate property are used as the reference object; instead, a marker whose dimensions are known in advance may be placed in the real estate property as the reference object before shooting. In this case, the position of the reference object in the 3D space can be estimated more easily.
(Third Embodiment)
 Next, a third embodiment will be described.
 The configuration of the image processing apparatus according to the third embodiment of the present invention is the same overall as that of the first embodiment (see FIG. 1); the position estimation processing for the 3D model executed by the 3D position estimation unit 152 differs from that of the first embodiment.
 FIG. 14 is a flowchart showing the image processing method according to the present embodiment. FIG. 15 is a floor plan of a real estate property, which is an example of the subject of a background image. FIG. 16 is a schematic diagram showing an example of a background image, cut along a horizontal plane passing through the center point P0'. In the present embodiment, the shooting point P0 in the real estate property is identified based on measured values of the distances (depths) and dimensions of subjects such as furniture and interior features in the real estate property, and the object position is estimated.
 First, in step S30, the image processing unit 150 acquires a background image with which a 3D model is to be combined.
 In the subsequent step S31, the image processing unit 150 measures, based on the background image L1, the distances from the shooting point P0 to subjects at a plurality of locations and the dimensions of those subjects, and stores the measurement data. The method of measuring the distance and dimensions of a subject is the same as described in the second embodiment (see FIGS. 12A to 13B). The number of subjects whose distances and dimensions are measured is arbitrary; as an example, by measuring the distances to three subjects, the coordinates of the shooting point P0 in the horizontal plane can be identified.
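 Identifying the horizontal coordinates of the shooting point P0 from distances to three (or more) subjects is a trilateration problem. The following is a minimal least-squares sketch, assuming the floor-plan coordinates of the measured subjects are known; the linearization and names are illustrative, not taken from the embodiment.

```python
import numpy as np

def locate_shooting_point(points: np.ndarray, dists: np.ndarray) -> np.ndarray:
    """Solve for the (x, y) of the shooting point from known points and distances.

    Subtracting the first distance equation from the others turns
    ||p - p_i||^2 = d_i^2 into the linear system A @ p = b.
    points: (n, 2) floor-plan coordinates; dists: (n,) measured distances; n >= 3.
    """
    A = 2.0 * (points[1:] - points[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + (points[1:] ** 2).sum(axis=1) - (points[0] ** 2).sum())
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

 With exactly three subjects the system is square and determines P0 directly; additional measurements over-determine it, and the least-squares solution averages out measurement error.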
 In the subsequent step S32, the image processing unit 150 associates the measurement data of the subjects with the background image. FIG. 16 shows a state in which the distances d31 to d38 measured in step S31 and the dimensions x31, x32, y31, and y32 in the horizontal plane are associated with the background image L1. Dimensions in the vertical plane are likewise associated with the background image.
 In the subsequent step S33, the image processing unit 150 acquires 3D data representing the 3D model to be combined with the background image L1. In step S34, the image processing unit 150 acquires parameters relating to the 3D model to be combined with the background image L1. The processing in steps S33 and S34 is the same as in steps S11 and S12 shown in FIG. 3, respectively.
 In the subsequent step S35, the image processing unit 150 determines the arrangement of the object on the floor plan. The processing for determining the arrangement of the object is the same as in the first embodiment (see step S131 in FIG. 5). In the following, it is assumed that the point P3 on the floor plan shown in FIG. 15 has been determined as the arrangement location of the object.
 In the subsequent step S36, the image processing unit 150 acquires the position of the shooting point P0 at which the background image L1 was captured. Specifically, the 3D position estimation unit 152 calculates the coordinates of the shooting point P0 on the floor plan from the distances measured in step S31 (see FIG. 15).
 In the subsequent step S37, the image processing unit 150 generates, based on the measurement data associated with the background image L1, a 3D space that is surrounded by the background image L1 and in which the shooting point P0 on the floor plan is associated with the center point P0' of the celestial sphere.
 In the subsequent step S38, the image processing unit 150 places the 3D model in the 3D space based on the arrangement of the object determined in step S35. Specifically, the 3D position estimation unit 152 calculates the coordinates of the arrangement location of the object (the point P3) relative to the shooting point P0 on the floor plan, estimates the position in the 3D space corresponding to this arrangement location (the point P3' in FIG. 16), and places the 3D model at this position.
 The processing in the subsequent steps S39 to S43 is the same as in steps S15 to S19 shown in FIG. 3, respectively.
 As described above, according to the third embodiment of the present invention, the distances and dimensions of subjects at a plurality of locations are measured, so the shooting point P0 on the floor plan can be determined with high accuracy. The position in the 3D space corresponding to the arrangement location of the object determined on the floor plan can therefore be estimated more accurately.
 Next, a modification of the third embodiment of the present invention will be described.
 In the third embodiment described above, the distances and dimensions of subjects are measured from the background image (see step S31), but the distances and dimensions may instead be actually measured at the time of shooting. Specifically, an auxiliary tool for distance measurement is attached to the omnidirectional camera or smartphone, or a dedicated application is run while photographing the real estate property, so that the distances to subjects and the dimensions of subjects are actually measured, and the measured data is stored in association with the image data. In this case, the 3D space corresponding to the real estate property can be reproduced more accurately.
 In the first to third embodiments described above, the light source is set based on the background image by image-based lighting, but the method of setting the light source is not limited to this, and various known methods can be applied. For example, the shadow of the 3D model may be generated by placing light sources appearing in the background image (windows, lighting fixtures, and the like) as light source objects in the 3D space and performing global illumination computation (the radiosity method, photon mapping, or the like).
 図17は、本発明の第1~第3の実施形態に係る画像処理装置が適用されるシステムの構成例を示すネットワーク図である。図17に示すシステム1は、画像処理装置(サーバ)30と、不動産管理端末31と、オブジェクト管理端末32と、画像表示端末33とを備え、これらの機器が通信ネットワークNを介して接続されている。通信ネットワークNとしては、例えばインターネット回線や電話回線、LAN、専用線、移動体通信網、WiFi(Wireless Fidelity)、ブルートゥース(登録商標)等の通信回線、又はこれらの組み合わせが用いられる。通信ネットワークNは、有線、無線、又はこれらの組み合わせのいずれであっても良い。 FIG. 17 is a network diagram showing a configuration example of a system to which the image processing apparatus according to the first to third embodiments of the present invention is applied. The system 1 shown in FIG. 17 includes an image processing apparatus (server) 30, a real estate management terminal 31, an object management terminal 32, and an image display terminal 33, and these devices are connected via a communication network N. There is. As the communication network N, for example, an Internet line, a telephone line, a LAN, a dedicated line, a mobile communication network, a communication line such as WiFi (Wireless Fidelity), Bluetooth (registered trademark), or a combination thereof is used. The communication network N may be wired, wireless, or a combination of these.
The image processing apparatus 30 is implemented by a host computer with high arithmetic processing capability; it functions as a server that centrally manages the system 1 and executes the image processing described in the first to third embodiments. The image processing apparatus 30 need not be a single computer and may consist of a plurality of computers distributed over the communication network N.
The real estate management terminal 31 manages information on rental and sale properties. Specifically, it stores transaction-related information such as the property's location, owner, and rental or sale conditions, as well as image data for the property (floor plan data and background image data). The floor plan data and background image data are uploaded from the real estate management terminal 31 to the image processing apparatus 30 and used in the image processing.
The object management terminal 32 creates and stores information on objects such as furniture to be combined with the background image, namely the data and parameters of the 3D models representing the objects, and the setting files. This information is uploaded from the object management terminal 32 to the image processing apparatus 30 and used in the image processing.
The image display terminal 33 is a terminal device for displaying the image composited by the image processing apparatus 30 and letting the user view it. By displaying two-dimensional still images or video, the image display terminal 33 lets the user perceive a three-dimensional virtual space (VR). As the image display terminal 33, a dedicated goggle-type device worn directly on the user's head (a so-called head mounted display (HMD)) may be used, or a general-purpose smartphone running a dedicated application may be mounted in a goggle-type holder.
Inside the image display terminal 33, two lenses are mounted at positions corresponding to the user's left and right eyes. When playing back the VR, two images provided with parallax are displayed in the left and right regions of the screen. By viewing the two images with the left and right eyes through the two lenses, the user can perceive the image three-dimensionally (stereoscopically).
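A minimal sketch of how the two parallax viewpoints could be derived is shown below: the scene is rendered twice from eye positions offset by half the interpupillary distance (IPD) to each side of the viewpoint. The render() calls are placeholders, and the 0.064 m IPD is a common default rather than a value from this embodiment.

```python
# Minimal sketch: compute left/right eye positions for stereo rendering.
import numpy as np

def stereo_eye_positions(center, forward, up, ipd_m=0.064):
    """Return (left_eye, right_eye) world positions for a given viewpoint."""
    center = np.asarray(center, dtype=float)
    right = np.cross(np.asarray(forward, dtype=float), np.asarray(up, dtype=float))
    right /= np.linalg.norm(right)       # unit vector pointing to the right
    half = 0.5 * ipd_m * right
    return center - half, center + half

left, right = stereo_eye_positions(center=(0, 1.4, 0), forward=(0, 0, -1), up=(0, 1, 0))
print(left, right)
# left_image  = render(scene, eye=left)    # placeholder renderer calls
# right_image = render(scene, eye=right)
```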
Of course, the image display terminal 33 is not limited to an HMD; a tablet terminal or a stationary display may also be used. In this case, the image obtained by compositing the three-dimensional model of an object such as furniture with the background image may be displayed as a panoramic image.
In such a system 1, the image processing apparatus 30 uses the image data of the background image uploaded from the real estate management terminal 31 and the 3D model data and the like uploaded from the object management terminal 32 to generate a composite image in which the 3D model is combined with the background image, and transmits the image data of the composite image to the image display terminal 33. The composite image can thereby be displayed on the image display terminal 33.
In the system 1, further terminal devices may be connected to the communication network N. For example, a separate terminal device may be provided for letting the user select a background image or the objects to be combined with it, or for specifying where on the floor plan the objects are placed. A plurality of image display terminals 33 may also be provided so that the same content can be viewed on them simultaneously.
FIG. 18 is a schematic view showing another configuration example of a system to which the image processing apparatus according to the first to third embodiments of the present invention is applied. In FIG. 18, bold solid arrows represent the flow of downloaded data, and dashed arrows represent the flow of uploaded data.
The system 2 shown in FIG. 18 includes an image processing apparatus (editor) 40, an image management server 41, an object management device 42, a converter 43, a service management device 44, and an image display terminal 45. These devices are connected to one another via a communication network.
The image processing apparatus (editor) 40 functions as an editor that processes and edits background images by executing the image processing described in the first to third embodiments.
The image management server 41 manages the images used in the system 2.
The object management device 42 creates, or imports from an external source and stores, data representing the 3D models of objects such as furniture and furnishings together with related information such as textures (hereinafter collectively referred to as 3D files), as well as setting files describing placement information for the furniture and so on. Note that "FBX" in FIG. 18 is one example of a file format for 3D model data; the file formats usable in this system are not limited to it.
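The embodiment leaves the content of the setting file unspecified; purely for illustration, a JSON-style file describing per-object placement might look like the following, where every key, value, and unit in the schema is hypothetical.

```python
# Minimal sketch: a hypothetical placement setting file and how it might be read.
import json

SETTING_FILE_EXAMPLE = """
{
  "property_id": "room-001",
  "objects": [
    {"model": "sofa_01.fbx",  "floorplan_xy_m": [2.0, 1.0], "rotation_deg": 90},
    {"model": "table_03.fbx", "floorplan_xy_m": [2.8, 1.6], "rotation_deg": 0}
  ]
}
"""

settings = json.loads(SETTING_FILE_EXAMPLE)
for obj in settings["objects"]:
    print(obj["model"], "->", obj["floorplan_xy_m"], obj["rotation_deg"])
```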
The converter 43 converts the data representing a 3D model into a file that the image processing apparatus 40 can read and that bundles the model with related information such as textures.
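A minimal sketch of such a bundling step is shown below, assuming a ZIP container as the bundled file; the actual bundled format is not specified in the embodiment.

```python
# Minimal sketch: pack one FBX file and its textures into a single bundle
# that an editor-side loader could ingest. The ZIP container is an
# illustrative choice, not the format used by the patent's converter.
import zipfile
from pathlib import Path

def bundle_model(fbx_path, texture_dir, out_path):
    """Archive the model file plus every PNG texture under texture_dir."""
    with zipfile.ZipFile(out_path, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        zf.write(fbx_path, arcname=Path(fbx_path).name)
        for tex in sorted(Path(texture_dir).glob("*.png")):
            zf.write(tex, arcname=f"textures/{tex.name}")

# bundle_model("sofa_01.fbx", "sofa_01_textures/", "sofa_01.bundle")
```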
The service management device 44 includes a storage unit for floor plan data and background image data, as well as a screen capable of displaying floor plans and background images. It is used to display the floor plan on the screen and let the user specify where objects such as furniture are placed, and to display on the screen, and present to the user, the background image 44a and the composite image 44b in which the 3D models of the objects have been combined.
The configuration of the image display terminal 45 is the same as that of the image display terminal 33 shown in FIG. 17; it is used when the user views, as VR, the composite image in which the 3D models have been combined with the background image.
In such a system 2, the object management device 42 first uploads the 3D model data for objects such as furniture to the image management server 41. The converter 43 downloads the 3D model data from the image management server 41, converts it into a file bundling the 3D model, textures, and so on, and uploads that file back to the image management server 41. The image processing apparatus 40 downloads the converted file from the image management server 41, downloads the setting file from the object management device 42, and further downloads the background image data from the service management device 44. The image processing apparatus 40 then performs the processing needed to combine the 3D models with the background image (lighting setup, material adjustment, etc.) and uploads the resulting composite image data file to the service management device 44. The image display terminal 45 downloads the composite image data file from the service management device 44 and, based on that file, plays back the composite image, i.e., the home staging content.
The present invention is not limited to the first to third embodiments and their modifications; various inventions can be formed by appropriately combining the components disclosed in the first to third embodiments. For example, an invention may be formed by omitting some components from all those shown in the first to third embodiments and their modifications, or by appropriately combining components shown in different embodiments and modifications.
1, 2 system
10 image processing apparatus
11 communication interface
12 display unit
13 operation input unit
14 storage unit
15 processor
20 omnidirectional camera
21 reference object
30 image processing apparatus (server)
31 real estate management terminal
32 object management terminal
33 image display terminal
40 image processing apparatus (editor)
41 image management server
42 object management device
43 converter
44 service management device
45 image display terminal
141 program storage unit
142 background image data storage unit
143 floor plan data storage unit
144 3D data storage unit
145 composite image data storage unit
150 image processing unit
151 placement setting unit
152 3D position estimation unit
153 object layer generation unit
154 shadow generation unit
155 shadow layer generation unit
156 image combining unit

Claims (9)

1. An image processing apparatus for combining, with a background image in which a real estate property appears, an image of an object virtually placed in the real estate property, the apparatus comprising:
   a placement setting unit configured to set a placement location of the object on a floor plan of the real estate property;
   a 3D position estimation unit configured to estimate an object position, which is the position corresponding to the placement location of the object on the floor plan, in a three-dimensional graphics space enclosed by a celestial sphere image acquired as the background image by photographing the real estate property;
   a shadow generation unit configured to generate a shadow produced by placing a three-dimensional model of the object at the object position;
   an object layer generation unit configured to generate a first layer by rendering an image in which the three-dimensional model placed at the object position is projected, with the center of the three-dimensional graphics space as the projection center, onto the same celestial sphere as the background image;
   a shadow layer generation unit configured to generate a second layer by rendering an image in which the shadow is projected onto the celestial sphere with the center of the three-dimensional graphics space as the projection center; and
   an image combining unit configured to combine the second layer and the first layer with the background image.
2. The image processing apparatus according to claim 1, further comprising:
   a display unit configured to display an image of the floor plan; and
   an operation input unit configured to receive operations performed by a user,
   wherein the placement setting unit sets the placement location based on an operation, received by the operation input unit, on the image of the floor plan displayed on the display unit.
3. The image processing apparatus according to claim 1 or 2, wherein the shadow generation unit sets a light source based on the background image and generates the shadow of the three-dimensional model based on the light source.
4. The image processing apparatus according to any one of claims 1 to 3, wherein the 3D position estimation unit estimates the object position based on the dimensions of each part in the floor plan.
5. The image processing apparatus according to any one of claims 1 to 3, wherein the 3D position estimation unit estimates the object position based on the distance and dimensions of a subject located near the placement location.
6. The image processing apparatus according to any one of claims 1 to 3, wherein the 3D position estimation unit obtains a shooting point based on the distance and dimensions of a subject measured from the background image, and estimates the object position based on the shooting point.
7. The image processing apparatus according to any one of claims 1 to 3, wherein the 3D position estimation unit obtains a shooting point based on the actually measured distance and dimensions of a subject, and estimates the object position based on the shooting point.
8. An image processing method for combining, with a background image in which a real estate property appears, an image of an object virtually placed in the real estate property, the method comprising:
   setting a placement location of the object on a floor plan of the real estate property;
   estimating an object position, which is the position corresponding to the placement location of the object on the floor plan, in a three-dimensional graphics space enclosed by a celestial sphere image acquired as the background image by photographing the real estate property;
   generating a shadow produced by placing a three-dimensional model of the object at the object position;
   generating a first layer by rendering an image in which the three-dimensional model placed at the object position is projected, with the center of the three-dimensional graphics space as the projection center, onto the same celestial sphere as the background image;
   generating a second layer by rendering an image in which the shadow is projected onto the celestial sphere with the center of the three-dimensional graphics space as the projection center; and
   combining the second layer and the first layer with the background image.
9. An image processing program for causing a computer to execute a process of combining, with a background image in which a real estate property appears, an image of an object virtually placed in the real estate property, the process comprising:
   setting a placement location of the object on a floor plan of the real estate property;
   estimating an object position, which is the position corresponding to the placement location of the object on the floor plan, in a three-dimensional graphics space enclosed by a celestial sphere image acquired as the background image by photographing the real estate property;
   generating a shadow produced by placing a three-dimensional model of the object at the object position;
   generating a first layer by rendering an image in which the three-dimensional model placed at the object position is projected, with the center of the three-dimensional graphics space as the projection center, onto the same celestial sphere as the background image;
   generating a second layer by rendering an image in which the shadow is projected onto the celestial sphere with the center of the three-dimensional graphics space as the projection center; and
   combining the second layer and the first layer with the background image.


PCT/JP2018/040919 2017-11-04 2018-11-02 Image processing device, image processing method and image processing program WO2019088273A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2019523897A JP6570161B1 (en) 2017-11-04 2018-11-02 Image processing apparatus, image processing method, and image processing program

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2017-213359 2017-11-04
JP2017213359 2017-11-04
JP2018044281 2018-03-12
JP2018-044281 2018-03-12

Publications (1)

Publication Number Publication Date
WO2019088273A1 true

Family

ID=66331929

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/040919 WO2019088273A1 (en) 2017-11-04 2018-11-02 Image processing device, image processing method and image processing program

Country Status (2)

Country Link
JP (1) JP6570161B1 (en)
WO (1) WO2019088273A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006053694A (en) * 2004-08-10 2006-02-23 Riyuukoku Univ Space simulator, space simulation method, space simulation program and recording medium
JP2015142320A (en) * 2014-01-30 2015-08-03 株式会社バンダイナムコエンターテインメント Imaging printing system, server system and program
JP2017146762A (en) * 2016-02-17 2017-08-24 株式会社Acw−Deep Image display type simulation service providing system and image display type simulation service providing method
WO2017171005A1 (en) * 2016-04-01 2017-10-05 株式会社wise 3-d graphic generation, artificial intelligence verification and learning system, program, and method

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022101707A1 (en) * 2020-11-11 2022-05-19 Ricoh Company, Ltd. Image processing method, recording medium, and image processing system
WO2022209564A1 (en) * 2021-03-30 2022-10-06 グリー株式会社 Information processing system, information processing method, and information processing program
JP7449523B2 (en) 2021-03-30 2024-03-14 グリー株式会社 Information processing system, information processing method, information processing program
CN113313814A (en) * 2021-05-20 2021-08-27 广州美术学院 Indoor design system and method based on reverse modeling and AR technology
JP6961157B1 (en) * 2021-05-25 2021-11-05 株式会社x garden Information processing system, information processing method and program
JP2022181131A (en) * 2021-05-25 2022-12-07 株式会社x garden Information processing system, information processing method and program

Also Published As

Publication number Publication date
JPWO2019088273A1 (en) 2019-11-14
JP6570161B1 (en) 2019-09-04

Similar Documents

Publication Publication Date Title
US10587864B2 (en) Image processing device and method
US10873741B2 (en) Image processing apparatus and method
US20240112430A1 (en) Techniques for capturing and displaying partial motion in virtual or augmented reality scenes
JP6570161B1 (en) Image processing apparatus, image processing method, and image processing program
US9420253B2 (en) Presenting realistic designs of spaces and objects
US11640672B2 (en) Method and system for wireless ultra-low footprint body scanning
KR101980261B1 (en) System and method for furniture placement simulation using augmented reality and computer program for the same
US20180144237A1 (en) System and method for body scanning and avatar creation
US10085008B2 (en) Image processing apparatus and method
US7965304B2 (en) Image processing method and image processing apparatus
JP4804256B2 (en) Information processing method
Střelák et al. Examining user experiences in a mobile augmented reality tourist guide
JP6669063B2 (en) Image processing apparatus and method
US20160078663A1 (en) Cloud server body scan data system
US20120095589A1 (en) System and method for 3d shape measurements and for virtual fitting room internet service
JP2022077148A (en) Image processing method, program, and image processing system
Ozacar et al. A low-cost and lightweight 3D interactive real estate-purposed indoor virtual reality application
US20150138199A1 (en) Image generating system and image generating program product
WO2018182938A1 (en) Method and system for wireless ultra-low footprint body scanning
JP7476511B2 (en) Image processing system, image processing method and program
JP6679966B2 (en) Three-dimensional virtual space presentation system, three-dimensional virtual space presentation method and program
JP7445348B1 (en) Information processing device, method, program, and system

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2019523897

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18874822

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18874822

Country of ref document: EP

Kind code of ref document: A1