WO2022004394A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program Download PDF

Info

Publication number
WO2022004394A1
WO2022004394A1 (PCT/JP2021/022935)
Authority
WO
WIPO (PCT)
Prior art keywords
screen
shape
information
content
information processing
Prior art date
Application number
PCT/JP2021/022935
Other languages
French (fr)
Japanese (ja)
Inventor
誠史 友永
洋 今村
孝至 高松
徹 長良
浩二 長田
Original Assignee
Sony Group Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Group Corporation
Publication of WO2022004394A1 publication Critical patent/WO2022004394A1/en

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/02Arrangements for holding or mounting articles, not otherwise provided for for radio sets, television sets, telephones, or the like; Arrangement of controls thereof
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B21/00Projectors or projection-type viewers; Accessories therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/74Projection arrangements for image reproduction, e.g. using eidophor

Definitions

  • This technology relates to information processing devices, information processing methods and programs, and more specifically to information processing devices that provide a good image viewing experience in a system that projects images on a screen.
  • The interior of a car may be equipped with a liquid crystal or organic EL (electroluminescent) display of about 6 to 15 inches for acquiring and enjoying information.
  • The images viewed on such a display are, for example, TV programs, movies, navigation information, destinations, stop-off information, and the like. Due to its structure, image display on such a display is limited to a flat surface or a gently curved surface.
  • Conventionally, projectors have been known. A projector allows a free choice of the shape of the projection surface onto which an image is projected and, taking advantage of this characteristic, is also used for projection mapping and the like, in which an image is projected so as to fit onto a three-dimensional object. The use of projectors in automobiles is also being considered; for example, Patent Document 1 discloses a system for projecting an image from a projector (projection unit) onto a side window.
  • The purpose of this technology is to provide a good video viewing experience in a system that projects video onto a screen.
  • The concept of this technology is an information processing apparatus including a control unit that controls a first process of generating a display image based on content and projecting the generated display image onto a screen arranged in a predetermined space, and a second process of determining a screen shape based on the content or the viewing environment information of the image projected on the screen and deforming the screen into the determined screen shape.
  • The present technology includes a control unit that controls the first process and the second process.
  • The first process is a process of generating a display image based on the content and projecting the generated display image onto a screen arranged in a predetermined space. For example, the predetermined space may be the interior space of a car, and the screen may be arranged on the ceiling portion of the car; arranging the screen on the ceiling makes it possible to use the wide ceiling area effectively.
  • The second process is a process of determining the screen shape based on the content or the viewing environment information of the image projected on the screen, and deforming the screen into the determined screen shape.
  • For example, in the second process, the shape of the screen may be deformed by a deformation mechanism arranged between the ceiling portion of the car and the screen. Placing the deformation mechanism between the ceiling and the screen prevents inconveniences such as spoiling the aesthetic appearance of the car interior.
  • Further, for example, in the second process, the screen shape may be determined based on the content of the image projected on the screen. Determining the screen shape from the content of the projected image makes it possible to provide the viewer with a viewing experience suited to that image.
  • Further, for example, in the second process, metadata that is associated with the content and stores screen shape information may be acquired, and the screen shape may be determined based on the acquired metadata. Because the screen shape is determined from metadata associated with the content, it can be determined easily and appropriately to match the content.
  • In this case, for example, the screen shape information may be composed of one or more information sets, each including a section and the screen shape for that section, so that the screen shape can be determined appropriately for each section of the content. The screen shape information may also indicate the shape of the entire area (entire surface) of the screen or the shape of each divided area of the screen, which makes it possible to determine the screen shape appropriately for the whole area or for each divided area.
  • the metadata may be acquired from a content server for acquiring the content or a metadata server different from the content server via the network.
  • the metadata associated with the content can be acquired from the external server and used.
  • the screen shape information stored in the metadata may be set based on the history of the screen shape used by a large number of people when projecting an image based on the content on the screen in the content server or the metadata server.
  • the screen shape can be determined to be a good shape used by a large number of people.
  • For example, the viewing environment information may include information on a plurality of viewing positions from which the image projected on the screen is viewed and on the number of viewers at each viewing position. Because the viewing environment information includes the viewing positions and the number of viewers at each position, the screen shape can be determined appropriately according to them.
  • In this case, for example, in the second process the screen shape may be determined so as to match the viewing position with the largest number of viewers, so that the screen-projected image can be viewed well at that viewing position.
  • Further, in this case, for example, the predetermined space is the interior space of the car, the screen is arranged on the ceiling portion of the car, and the viewing environment information may include information on the seat positions in the front-rear direction of the car and the number of occupants at each seat position.
  • This allows the screen shape to be determined appropriately according to the front-rear seat positions and the number of occupants at each seat position.
  • For example, the viewing environment information may further include occupant state information, for example reclining information of the seat in which an occupant is sitting.
  • the number of occupants in each seat position may be adjusted based on the occupant status information, and the screen shape may be determined so as to match the seat position with the largest number of occupants after the adjustment.
  • the screen shape can be appropriately determined by further considering the state of the occupants in addition to the seat positions in the front-rear direction of the vehicle and the number of occupants in each seat position.
  • Further, for example, the viewing environment information may further include attribute information of the viewer at each viewing position, in addition to the information on the plurality of viewing positions and the number of viewers at each position. This makes it possible to determine the screen shape appropriately by also taking into account the attributes of the viewers at each viewing position.
  • In this case, for example, in the second process, when a priority viewer indicated by the attribute information is present, for example a VIP or a company officer, the screen shape may be determined so as to match the viewing position of that priority viewer. As a result, the priority viewer can view the screen-projected image well.
  • Further, for example, the predetermined space is the vehicle interior, the screen is arranged on the ceiling portion of the vehicle, and the viewing environment information may include information on the place where the vehicle is traveling. Because the viewing environment information includes the traveling location, the screen shape can be determined appropriately according to that location.
  • Further, for example, the control unit may further control a process of recording, in association with the content, information on changes made to the shape of the screen while the image projected on the screen is being viewed. Recording the screen shape change information in association with the content makes it possible to determine the screen shape based on that change information when an image of the same content is projected again.
  • As described above, in the present technology, the screen shape is determined based on the content or the viewing environment information of the image projected on the screen, and the screen is deformed and controlled into the determined shape, so that a system that projects images onto a screen can provide the viewer with a good viewing experience.
  • FIG. 1 schematically shows a configuration example of an in-vehicle projection system 10 as an embodiment.
  • The car 100 has, as interior objects, a front seat 101 located on the front side in the traveling direction and a rear seat 102 located on the rear side.
  • The front seat 101 and the rear seat 102 each have two seats in the left-right direction, and each is configured as a reclining seat whose backrest can be tilted rearward.
  • the screen 105 is arranged corresponding to the ceiling portion 103 of the car 100.
  • a display image generated based on the content is projected and displayed on the screen 105 by, for example, a projector 107 arranged at the rear of the car 100.
  • the image displayed on the screen 105 in this way can be viewed by the occupants in the front seat 101 and the rear seat 102.
  • The screen 105 can be deformed into a flat surface, a concave surface, a convex surface, or the like. This deformation can be applied not only to the entire surface of the screen 105 but also to each divided region of the screen 105.
  • the shape deformation of the screen 105 is performed by the deformation mechanism 106 arranged between the ceiling portion 103 and the screen 105.
  • the deformation mechanism 106 is configured such that a plurality of actuators 108 are two-dimensionally and evenly arranged on the entire surface of the screen 105.
  • The configuration example of FIG. 1 shows the deformation mechanism 106 composed of a plurality of actuators 108, but the deformation mechanism 106 is not limited to this. For example, the deformation mechanism 106 may instead be configured by arranging a plurality of air balloons 109 two-dimensionally and evenly over the entire surface of the screen 105.
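  • As an illustration of how such an actuator array could realize a target surface, the following minimal sketch (not part of the patent text) computes a per-actuator stroke for a concave shape; the grid size, the stroke range, and the Actuator/set_stroke interface are assumptions made only for this example.

```python
import numpy as np

# Hypothetical interface to one linear actuator of the deformation mechanism 106.
class Actuator:
    def __init__(self, x: float, y: float):
        self.x, self.y = x, y          # position on the screen plane (0..1)
        self.stroke_mm = 0.0           # downward displacement of the screen surface

    def set_stroke(self, mm: float) -> None:
        self.stroke_mm = mm

def concave_strokes(actuators, depth_mm: float = 60.0) -> None:
    """Push the centre of the screen down more than the edges to form a concave surface.

    The displacement follows a simple paraboloid; a real system would use the
    mechanism's own calibration data instead.
    """
    for a in actuators:
        # distance from the screen centre (0.5, 0.5), normalised to 0..1
        r = np.hypot(a.x - 0.5, a.y - 0.5) / np.hypot(0.5, 0.5)
        a.set_stroke(depth_mm * (1.0 - r ** 2))

# Example: a 5 x 5 grid of actuators spread evenly over the screen surface.
grid = [Actuator(x, y) for x in np.linspace(0, 1, 5) for y in np.linspace(0, 1, 5)]
concave_strokes(grid)
```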
  • In the in-vehicle projection system 10, the screen shape is determined based on the content or the viewing environment information of the image projected on the screen 105, and the shape of the screen is controlled so as to deform into the determined screen shape.
  • FIG. 3 is a block diagram showing a configuration example of the in-vehicle projection system 10.
  • the vehicle-mounted projection system 10 includes a projector 107, a deformation mechanism 106, a screen 105, and a sensor group 110.
  • the screen 105 is arranged corresponding to the ceiling portion 103 of the car 100.
  • a display image generated based on the content is projected and displayed on the screen 105 by the projector 107.
  • the deformation mechanism 106 is arranged between the ceiling portion 103 of the car 100 and the screen 105.
  • the deformation mechanism 106 deforms the shape of the screen 105.
  • the sensor group 110 includes various sensors for obtaining viewing environment information of the image projected on the screen.
  • The sensor group 110 includes, for example, sensors for detecting how many occupants are present and where they are in the front seat 101 and the rear seat 102 (for example, a seating sensor, a human-presence sensor, an image sensor, etc.), sensors for obtaining attribute information such as VIP, company officer, customer, boss, senior, colleague, or friend (for example, an image sensor), sensors for detecting whether the front seat 101 or the rear seat 102 in which an occupant is sitting is reclined (for example, a reclining sensor, an image sensor, etc.), and sensors for detecting the place where the vehicle is traveling (for example, an image sensor, etc.).
  • The projector 107 has a CPU 120, a ROM 121, a RAM 122, a bus 123, an input/output interface 124, an operation unit 125, an input/output unit 126, a storage unit 127, a display unit 128, a projection unit 129, and a communication unit 130.
  • the CPU 120 functions as, for example, an arithmetic processing device or a control device, and controls the operation of each component based on, for example, various programs recorded in the ROM 121.
  • the ROM 121 is a means for storing a program read into the CPU 120, data used for calculation, and the like.
  • In the RAM 122, for example, a program read into the CPU 120 and various data that change appropriately while the program is executed are stored temporarily or permanently.
  • the CPU 120, ROM 121, and RAM 122 are connected to each other via the bus 123.
  • the bus 123 is connected to various components via the input / output interface 124.
  • the operation unit 125 configures a user interface for the user to perform various operations.
  • a sensor group 110 is connected to the input / output unit 126.
  • the detection signal of each sensor included in the sensor group 110 is supplied to the CPU 120 via the input / output unit 126.
  • the CPU 120 acquires the viewing environment information of the image projected on the screen 105 based on the detection signal of each sensor.
  • the deformation mechanism 106 is connected to the input / output unit 126.
  • The CPU 120 determines the screen shape based on the content or the viewing environment information of the image projected on the screen 105, and controls the deformation mechanism 106 so that the shape of the screen 105 is deformed into the determined screen shape.
  • the storage unit 127 is composed of an HDD (Hard Disk Drive) or a flash memory, and stores the content related to the image to be displayed on the screen 105.
  • the content may be stored in advance in the storage unit 127, or may be downloaded from a content server and stored via a network such as the Internet, as will be described later. Further, the storage unit 127 stores the content and the metadata in which the screen shape information associated with the content is stored. Further, the storage unit 127 stores the change information when the user (occupant) operates the operation unit 125 to change the shape of the screen 105 while viewing the image projected on the screen 105.
  • the display unit 128 generates a display image based on the content.
  • This content may be content stored in the storage unit 127, or content streamed from a content server via a network such as the Internet.
  • the projection unit 129 projects the display image generated by the display unit 128 onto the screen 105.
  • The communication unit 130 communicates via a network such as the Internet with the content server, the metadata server described later, and the like; it downloads or streams the content from the content server and also acquires, from the content server or the metadata server, the metadata that is associated with the content and stores the screen shape information.
  • Further, the communication unit 130 transmits the above-mentioned change information to the content server or the metadata server via a network such as the Internet.
  • In the content server or the metadata server, it is possible to set or update the screen shape information stored in the metadata associated with the content, based on the history of the screen shapes used when a large number of people projected images based on the same content onto the screen 105, and to distribute that screen shape information in association with the content.
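  • How a server would derive this screen shape information from the usage history is not specified in detail; a minimal sketch under an assumed data layout (a log of per-viewing shape choices keyed by content ID) could look like the following.

```python
from collections import Counter

# Hypothetical usage log: content_id -> list of screen shapes chosen by viewers.
usage_history = {
    "content-123": ["concave", "concave", "flat", "concave"],
}

def shape_metadata_from_history(content_id: str, history: dict) -> dict:
    """Pick the most frequently used screen shape for a piece of content and
    package it as metadata to be distributed together with that content."""
    counts = Counter(history.get(content_id, []))
    if not counts:
        return {"content_id": content_id, "screen_shape": "flat"}  # assumed default
    shape, _ = counts.most_common(1)[0]
    return {"content_id": content_id, "screen_shape": shape}

print(shape_metadata_from_history("content-123", usage_history))
```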
  • Determination and control of screen shape: The determination of the screen shape will be described. As described above, the shape of the screen 105 is determined based on the content or the viewing environment information of the image projected on the screen 105.
  • FIG. 4 shows a modified example of the shape of the screen 105.
  • the shape of the screen 105 is determined, for example, based on the content (property) of the image projected on the screen 105.
  • FIG. 4A shows an example in which the shape of the screen 105 is a flat surface.
  • This can improve the visibility of images that call for an at-a-glance view, such as news, destination lists, and itinerary information including stop-off points.
  • FIG. 4B shows an example in which the shape of the screen 105 is concave. By making the concave surface so as to give a feeling of a deep space, it is possible to enhance the immersive feeling of the spherical image such as a planetarium.
  • FIG. 4C shows an example in which the shape of the screen 105 is a convex surface. By making it a convex surface, it is possible to obtain a video viewing experience that is not normally experienced. As the image projected in this case, for example, the state of the moon surface can be considered.
  • the screen shape is determined, for example, based on the metadata in which the screen shape information is stored, which is associated with the content related to the projected image.
  • the metadata is acquired from, for example, the storage unit 127.
  • the metadata is acquired from, for example, a content server or a metadata server different from this content server. Even when the content related to the projected image is acquired from the storage unit 127, it is conceivable to acquire the metadata from the metadata server.
  • FIG. 5A shows an example in which the vehicle-mounted projection system 10 is connected to the content server 201 via the network 300.
  • the content server 201 distributes the content related to the projected image to the vehicle-mounted projection system 10 via the network 300.
  • This content contains metadata that stores screen shape information.
  • FIG. 5B shows an example in which the vehicle-mounted projection system 10 is connected to the content server 201 and the metadata server 202 via the network 300.
  • the content server 201 distributes the content related to the projected image to the vehicle-mounted projection system 10 via the network 300.
  • the in-vehicle projection system 10 acquires the metadata in which the screen shape information is stored, which is associated with the content related to the projected image, from the metadata server 202 via the network 300.
  • Next, an example of expressing the screen shape information as metadata will be described.
  • The screen shape information stored in the metadata shows, for example, a screen shape corresponding to the entire section of the content, as illustrated in FIG. 6(a), or a screen shape for each divided section of the content, as illustrated in FIG. 6(b).
  • In the case of FIG. 6(a), the screen shape of the entire section is concave. In the case of FIG. 6(b), the screen shape of the first divided section is flat, that of the next divided section is convex, and that of the last divided section is flat.
  • the screen shape information stored in the metadata is composed of one or more information sets including the section and the screen shape information.
  • FIG. 7A shows an example of metadata in the case of showing the screen shape corresponding to the entire section of the content (see FIG. 6A).
  • FIG. 7 (b) shows an example of metadata in the case where the screen shape for each divided section in which the entire section of the content is divided is shown (see FIG. 6 (b)).
  • scene “SceneID 3” It consists of a table with an information set of time "00:40:00", end time "01:00:00” and screen shape "flat”.
  • The screen shape information may indicate the shape of the entire region of the screen 105, as illustrated in FIG. 8(a), or the screen shape for each divided area of the screen 105, as illustrated in FIG. 8(b). In the case of FIG. 8(a), the screen shape of the entire region is concave; in the case of FIG. 8(b), the shape of a central portion is convex and the shape of the other regions is flat.
  • FIG. 9B shows an example of metadata in the case of showing the screen shape for each divided section in which the entire section of the content is divided.
  • In the previous example, the information set of each scene directly holds the screen shape information; in the example of FIG. 9(b), each scene instead holds a scene mode defined in FIG. 9(a).
  • "SceneModeID (base)" indicates the shape of the entire area of the screen 105, and "SceneModeID (top)" indicates a partial area and the shape with which that partial area replaces the whole-area shape indicated by "SceneModeID (base)".
  • The transformation mode table may be expanded, or the shape may be expressed by a transformation mode and bitmap data.
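  • To make the two metadata layouts concrete, the following sketch shows one possible representation of the per-scene table (cf. FIG. 7) and of the scene-mode indirection (cf. FIG. 9); the field names, the times for scenes other than "SceneID 3", and the mode identifiers are assumptions made only for this illustration.

```python
# Per-scene table: each information set holds a section (start/end) and a shape.
scene_table = [
    {"scene_id": 1, "start": "00:00:00", "end": "00:20:00", "shape": "flat"},     # assumed times
    {"scene_id": 2, "start": "00:20:00", "end": "00:40:00", "shape": "convex"},   # assumed times
    {"scene_id": 3, "start": "00:40:00", "end": "01:00:00", "shape": "flat"},
]

# Scene-mode indirection: the base mode gives the whole-area shape and the top mode
# gives a partial area whose shape replaces the base shape in that area.
scene_modes = {
    "mode_a": {"base": "concave", "top": None},
    "mode_b": {"base": "flat", "top": {"region": "center", "shape": "convex"}},
}
scene_table_with_modes = [
    {"scene_id": 1, "start": "00:00:00", "end": "00:40:00", "scene_mode": "mode_a"},  # assumed
    {"scene_id": 2, "start": "00:40:00", "end": "01:00:00", "scene_mode": "mode_b"},  # assumed
]

def shape_at(position: str, table=scene_table) -> str:
    """Return the screen shape for a playback position given as zero-padded 'HH:MM:SS'."""
    for row in table:
        if row["start"] <= position < row["end"]:
            return row["shape"]
    return "flat"  # assumed default outside all listed sections

print(shape_at("00:45:00"))  # -> "flat"
```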
  • the flowchart of FIG. 10 shows an example of the procedure for controlling the deformation of the screen shape based on the content in the CPU 120.
  • In step ST2, screen shape information (metadata) corresponding to the content is acquired. This screen shape information is read from the storage unit 127 in which it is stored together with the content, or is received from the content server 201 that delivers the content or from a metadata server 202 different from the content server 201.
  • In step ST3, the CPU 120 controls the deformation of the screen 105 based on the acquired screen shape information. After the process of step ST3, the CPU 120 ends the process in step ST4.
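  • A minimal sketch of this ST1-ST4 flow follows; the deform_screen helper stands in for driving the deformation mechanism 106, and the shape table is assumed to have already been acquired in step ST2.

```python
def deform_screen(shape: str) -> None:
    """Hypothetical stand-in for driving the deformation mechanism 106."""
    print(f"deforming screen to: {shape}")

def control_shape_for_content(shape_table: list, playback_position: str) -> None:
    """ST3: deform the screen according to the section of the acquired metadata
    (ST2) that contains the current playback position, then end (ST4)."""
    for row in shape_table:
        if row["start"] <= playback_position < row["end"]:
            deform_screen(row["shape"])
            return
    deform_screen("flat")  # assumed fallback when no section matches

# Example usage with a single-scene table like the one described above.
control_shape_for_content(
    [{"start": "00:40:00", "end": "01:00:00", "shape": "flat"}], "00:45:00"
)
```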
  • In this way, the shape of the screen 105 is controlled and deformed based on the screen shape information (metadata) associated with the content.
  • FIG. 11 shows a state when the screen 105 is looked up from the position of the occupant.
  • When the screen 105 is flat and is looked up at from the occupant's position, the viewing angle is so steep that the screen is difficult to see.
  • When the screen 105 is concave, it is easy to see when looked up at from the occupant's position. In this case, therefore, it is desirable to deform the shape of the screen 105 into a concave surface.
  • FIG. 12 shows states in which there are occupants in both the front seat 101 and the rear seat 102, with the screen 105 flat in one case and concave in the other.
  • It is conceivable that the occupants of the front seat 101 find the concave screen 105 easier to see than the flat one, while conversely the occupants of the rear seat 102 find the concave surface difficult to see because the viewing angle becomes too shallow. In such a case, for example, it is conceivable to determine the shape of the screen 105 from the numbers of occupants in the front seat 101 and the rear seat 102, so that the screen shape can be determined appropriately according to those numbers.
  • FIG. 13 shows a state in which two occupants are sitting in the front seat 101 and one occupant is sitting in the rear seat 102. In this case, it is desirable to deform the screen 105 into a concave surface having a shape that is easy to see from the front seat 101, which has a larger number of passengers.
  • the flowchart of FIG. 14 shows an example of the procedure for controlling the deformation of the screen shape based on the number of occupants sitting in the front seat 101 and the rear seat 102 in the CPU 120.
  • the CPU 120 starts processing in step ST11.
  • In step ST12, the CPU 120 checks the number of occupants in the front seat 101 and the number of occupants in the rear seat 102. In this case, the CPU 120 checks the number of occupants in each seat based on sensor output signals such as those of a seating sensor, a human-presence sensor, or an image sensor.
  • In step ST13, the CPU 120 determines whether or not the number of occupants in the front seat 101 is larger than the number of occupants in the rear seat 102. If it is not, the CPU 120 in step ST14 determines the shape of the screen 105 to match the rear seat 102, that is, a shape that is easy to see from the rear seat 102, for example a flat surface, and controls the deformation. After the process of step ST14, the CPU 120 ends the process in step ST15.
  • On the other hand, when the number of occupants in the front seat 101 is larger than the number of occupants in the rear seat 102, the CPU 120 in step ST16 determines the shape of the screen 105 to match the front seat 101, that is, a shape that is easy to see from the front seat 101, for example a concave surface, and controls the deformation. After the process of step ST16, the CPU 120 ends the process in step ST15.
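  • Expressed as code, the ST13 decision could look like the following sketch, with the sensor reads replaced by plain integer arguments.

```python
def shape_for_occupancy(front_count: int, rear_count: int) -> str:
    """ST13: if the front seat has more occupants than the rear seat, choose the
    front-friendly concave surface; otherwise choose the rear-friendly flat surface."""
    return "concave" if front_count > rear_count else "flat"

# FIG. 13 example: two occupants in the front seat, one in the rear seat -> concave.
assert shape_for_occupancy(front_count=2, rear_count=1) == "concave"
```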
  • FIG. 15 shows an example of a rule for deforming the screen shape based on the number of occupants in the front seat 101 and the rear seat 102.
  • When the number of occupants in the front seat 101 is not larger than the number in the rear seat 102, the shape of the screen 105 is deformed and controlled to match the rear seat 102; in this case the screen 105 is flat, which puts the occupants of the rear seat 102 in an easy-to-see state (see FIG. 12A).
  • When the number of occupants in the front seat 101 is larger than the number in the rear seat 102, the shape of the screen 105 is deformed and controlled to match the front seat 101; in this case the screen 105 is concave, which makes it easier for the occupants of the front seat 101 to see (see FIGS. 12(b) and 13).
  • In this way, the screen-projected image can be viewed well at the seat position with the larger number of occupants.
  • the screen shape can be appropriately determined including the attributes of the occupants sitting in the front seat 101 and the rear seat 102.
  • FIG. 16 shows a state in which two occupants are seated in the front seat 101, one occupant is seated in the rear seat 102, and the occupant in the rear seat 102 is a priority occupant such as a company officer.
  • In this case, the number of occupants in the front seat 101 is larger than in the rear seat 102, but the shape of the screen 105 is determined to be a flat surface that is easy to see from the rear seat 102 in which the priority occupant is seated.
  • the flowchart of FIG. 17 shows an example of a procedure for controlling deformation of the screen shape based on the number of occupants of the front seat 101 and the rear seat 102 in the CPU 120, and further, the attributes of the occupants.
  • the CPU 120 starts processing in step ST21.
  • In step ST22, the CPU 120 checks the number of occupants in the front seat 101 and the number of occupants in the rear seat 102. In this case, the CPU 120 checks the number of occupants in each seat based on sensor output signals such as those of a seating sensor, a human-presence sensor, or an image sensor.
  • Next, the CPU 120 checks the attributes of the occupants, such as VIP, company officer, customer, boss, senior, colleague, or friend.
  • the CPU 120 can check the attributes of the occupant by performing face recognition processing or the like based on the image signal which is the sensor output signal of the image sensor, for example. Further, in this case, for example, the attributes of the occupant can be checked by accessing the smartphone, the wearable device, or the like held by the occupant.
  • In step ST24, the CPU 120 determines whether or not a priority occupant is present based on the occupants' attribute information.
  • If there is a priority occupant, the CPU 120 in step ST25 determines the shape of the screen 105 according to the seat position of the priority occupant, that is, a shape that is easy to see from that seat position, and controls the deformation. For example, when the priority occupant is sitting in the front seat 101, the screen is deformed to match the front seat 101, for example into a concave surface, so that the priority occupant can see it easily; when the priority occupant is sitting in the rear seat 102, the screen is deformed to match the rear seat 102, for example into a flat surface, so that the priority occupant can see it easily.
  • After the process of step ST25, the CPU 120 ends the process in step ST26.
  • If there is no priority occupant, the CPU 120 determines in step ST27 whether or not the number of occupants in the front seat 101 is larger than the number of occupants in the rear seat 102. If it is not, the CPU 120 in step ST28 determines the shape of the screen 105 to match the rear seat 102, that is, a shape that is easy to see from the rear seat 102, for example a flat surface, and controls the deformation. After the process of step ST28, the CPU 120 ends the process in step ST26.
  • When the number of occupants in the front seat 101 is larger than the number of occupants in the rear seat 102, the CPU 120 in step ST29 determines the shape of the screen 105 to match the front seat 101, that is, a shape that is easy to see from the front seat 101, for example a concave surface, and controls the deformation. After the process of step ST29, the CPU 120 ends the process in step ST26.
  • In this way, the screen-projected image is basically easy to view at the seat position with the larger number of occupants; however, even when the number of occupants at a seat position is small, if a priority occupant is present there, the screen-projected image can be viewed well at that seat position.
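  • The ST21-ST29 flow adds an attribute check before the count comparison; the following sketch assumes the attributes are available as simple string labels.

```python
def shape_with_priority(front_count: int, rear_count: int,
                        front_attrs: list, rear_attrs: list,
                        priority_labels=("VIP", "company officer")) -> str:
    """ST24: if a priority occupant is present, match the screen to that occupant's
    seat (ST25); otherwise fall back to the occupant-count rule (ST27-ST29)."""
    if any(a in priority_labels for a in front_attrs):
        return "concave"                       # easy to see from the front seat
    if any(a in priority_labels for a in rear_attrs):
        return "flat"                          # easy to see from the rear seat
    return "concave" if front_count > rear_count else "flat"

# FIG. 16 example: two front occupants, one rear occupant who is a company officer.
assert shape_with_priority(2, 1, ["friend", "friend"], ["company officer"]) == "flat"
```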
  • FIG. 18 shows a state in which two occupants are sitting in the front seat 101 and two occupants are sitting in the rear seat 102.
  • In this case, the shape of the screen 105 is determined to be intermediate between a concave surface (shown by a one-dot chain line) that is easy to see from the front seat 101 and a flat surface (shown by a two-dot chain line) that is easy to see from the rear seat 102.
  • the flowchart of FIG. 19 shows an example of the procedure for controlling the deformation of the screen shape based on the number of passengers in the front seat 101 and the rear seat 102 in the CPU 120.
  • the CPU 120 starts processing in step ST31.
  • In step ST32, the CPU 120 checks the number of occupants in the front seat 101 and the number of occupants in the rear seat 102. In this case, the CPU 120 checks the number of occupants sitting in each seat based on sensor output signals such as those of a seating sensor, a human-presence sensor, or an image sensor.
  • In step ST33, the CPU 120 determines the magnitude relationship between the number of occupants in the front seat 101 and the number in the rear seat 102.
  • When the number of occupants in the front seat 101 is smaller than the number in the rear seat 102, the CPU 120 in step ST34 determines the shape of the screen 105 to match the rear seat 102, that is, a shape that is easy to see from the rear seat 102, for example a flat surface, and controls the deformation. After the process of step ST34, the CPU 120 ends the process in step ST35.
  • When the number of occupants in the front seat 101 is larger than the number in the rear seat 102, the CPU 120 in step ST36 determines the shape of the screen 105 to match the front seat 101, that is, a shape that is easy to see from the front seat 101, for example a concave surface, and controls the deformation. After the process of step ST36, the CPU 120 ends the process in step ST35.
  • When the numbers of occupants in the front seat 101 and the rear seat 102 are the same, the CPU 120 in step ST37 determines the shape of the screen 105 to be an intermediate shape between the front-seat alignment and the rear-seat alignment, that is, a shape between the concave surface that is easy to see from the front seat 101 and the flat surface that is easy to see from the rear seat 102, and controls the deformation.
  • In this way, the screen-projected image can be viewed well at the seat position with the larger number of occupants. Furthermore, when the numbers of occupants in the front seat 101 and the rear seat 102 are the same, the screen shape is deformed and controlled to an intermediate shape between a concave surface that is easy to see from the front seat 101 and a flat surface that is easy to see from the rear seat 102, so that the image is not particularly hard to see and can be viewed from either position.
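  • The three-way comparison of step ST33, including the intermediate shape used when the counts are equal, can be sketched as follows.

```python
def shape_three_way(front_count: int, rear_count: int) -> str:
    """ST33: compare the counts; equal counts (ST37) give an intermediate shape
    between the front-friendly concave surface and the rear-friendly flat surface."""
    if front_count > rear_count:
        return "concave"        # ST36: match the front seat
    if front_count < rear_count:
        return "flat"           # ST34: match the rear seat
    return "intermediate"       # ST37: between concave and flat

# FIG. 18 example: two occupants in each of the front and rear seats.
assert shape_three_way(2, 2) == "intermediate"
```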
  • FIG. 20 shows a state in which two occupants are seated in the front seat 101, one occupant is seated in the rear seat 102, and one of the two occupants in the front seat 101 is in a reclining state with the backrest tilted back.
  • In this case, the number of occupants in the front seat 101 is larger than in the rear seat 102, but because one of the two front-seat occupants is reclining, a flat screen shape matched to the rear seat 102 is easier for that occupant to see. Therefore, the occupant count of the front seat 101 is adjusted from two to one and that of the rear seat 102 from one to two, and the shape of the screen 105 is determined to be a flat surface that is easy to see from the rear seat 102.
  • the flowchart of FIG. 21 shows an example of a procedure for controlling deformation of the screen shape based on the number of occupants sitting in the front seat 101 and the rear seat 102 in the CPU 120, and further, the reclining state of the occupants.
  • the CPU 120 starts processing in step ST41.
  • In step ST42, the CPU 120 checks the number of occupants in the front seat 101 and the number of occupants in the rear seat 102. In this case, the CPU 120 checks the number of occupants sitting in each seat based on sensor output signals such as those of a seating sensor, a human-presence sensor, or an image sensor.
  • the CPU 120 checks the reclining state of the occupant of the front seat 101 in step ST43.
  • the CPU 120 checks the reclining state of the front seat 101, for example, based on the sensor output signals of the reclining sensor, the image sensor, and the like.
  • In step ST44, the CPU 120 determines the number of occupants in the reclining state in the front seat 101. If that number is 0, the CPU 120 immediately proceeds to the process of step ST45.
  • If one front-seat occupant is reclining, the CPU 120 in step ST46 adjusts the counts by reducing the number of occupants in the front seat 101 by one and increasing the number in the rear seat 102 by one, and then proceeds to the process of step ST45.
  • If two front-seat occupants are reclining, the CPU 120 in step ST47 adjusts the counts by reducing the number of occupants in the front seat 101 by two and increasing the number in the rear seat 102 by two, and then proceeds to the process of step ST45.
  • In step ST45, the CPU 120 determines whether or not the adjusted number of occupants in the front seat 101 is larger than the number in the rear seat 102. If it is not, the CPU 120 in step ST48 determines the shape of the screen 105 to match the rear seat 102, that is, a shape that is easy to see from the rear seat 102, for example a flat surface, and controls the deformation. After the process of step ST48, the CPU 120 ends the process in step ST49.
  • When the number of occupants in the front seat 101 is larger than the number in the rear seat 102, the CPU 120 in step ST50 determines the shape of the screen 105 to match the front seat 101, that is, a shape that is easy to see from the front seat 101, for example a concave surface, and controls the deformation. After the process of step ST50, the CPU 120 ends the process in step ST49.
  • In this way, the screen shape can be appropriately determined and the deformation controlled by taking the occupants' reclining state into account in addition to the seat positions and the numbers of occupants.
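  • The count adjustment for reclining front-seat occupants (steps ST44, ST46 and ST47) can be sketched as a small pre-processing step applied before the count comparison.

```python
def adjust_for_reclining(front_count: int, rear_count: int, reclining_front: int):
    """ST44-ST47: treat each reclining front-seat occupant as if seated in the rear,
    moving them from the front count to the rear count before the comparison."""
    moved = min(reclining_front, front_count)
    return front_count - moved, rear_count + moved

# FIG. 20 example: two front occupants (one reclining), one rear occupant.
front, rear = adjust_for_reclining(front_count=2, rear_count=1, reclining_front=1)
assert (front, rear) == (1, 2)
assert ("concave" if front > rear else "flat") == "flat"   # ST45/ST48: flat, rear-friendly
```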
  • the state of the occupant is not limited to the reclining state described above, and other states are also conceivable.
  • the state of the occupant may be a state of looking in a direction different from that of the screen 105 or a state of sleeping.
  • In these cases as well, it is conceivable to determine the shape of the screen 105 after adjusting the counts by subtracting such occupants from the numbers of occupants in the front seat 101 and the rear seat 102.
  • In the examples described above, the shape of the screen 105 is matched to the front seat 101 or the rear seat 102, or set to an intermediate shape; however, if the screen 105 can be deformed more freely, it is also conceivable to determine a shape that is easy to see for the occupants of both the front seat 101 and the rear seat 102.
  • FIG. 22 shows a state in which the shape of the screen 105 is deformed and controlled into a concave shape that is optimal for each occupant of the front seat 101 and the rear seat 102.
  • FIG. 23(a) shows a state of traveling in a typical city, and FIG. 23(b) shows a state of traveling on a highway through a forest. Note that these figures show the occupant and the screen 105 as viewed from the rear.
  • When traveling in a city, the shape of the screen 105 is determined to be concave and the deformation is controlled; in this case, displaying information about buildings and historic sites alongside the car on the screen 105 makes that information easier to see.
  • When traveling on a highway through a forest, the shape of the screen 105 is determined to be flat and the deformation is controlled, and information on the destination and stop-off points may be displayed.
  • the flowchart of FIG. 24 shows an example of the procedure for controlling the deformation of the screen shape based on the traveling location in the CPU 120.
  • the CPU 120 starts processing in step ST61.
  • the CPU 120 checks the traveling location in step ST62.
  • The CPU 120 checks the traveling location by, for example, performing image analysis processing on an image signal from an image sensor that captures the space outside the vehicle, or based on GPS information, navigation system information, and the like.
  • In step ST63, the CPU 120 determines the type of traveling location. When the traveling location is a city, the CPU 120 in step ST64 determines the shape of the screen 105 to be a concave surface and controls the deformation. After the process of step ST64, the CPU 120 ends the process in step ST65.
  • When the traveling location is a forest, the CPU 120 in step ST66 determines the shape of the screen 105 to be a flat surface and controls the deformation. After the process of step ST66, the CPU 120 ends the process in step ST65.
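  • A sketch of the ST63-ST66 decision follows, with the location classification reduced to a string that is assumed to have been produced elsewhere (for example by image analysis of the outside scene or from GPS and navigation information).

```python
def shape_for_location(location: str) -> str:
    """ST63: choose a concave surface in a city (ST64) and a flat surface in a
    forest (ST66); other location types are an assumption of this sketch."""
    if location == "city":
        return "concave"    # e.g. for information about buildings and historic sites
    if location == "forest":
        return "flat"       # e.g. for destination and stop-off point information
    return "flat"           # assumed default for locations not covered by the example

assert shape_for_location("city") == "concave"
```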
  • In the above, the shape of the screen 105 is determined depending on whether the traveling location is an urban area or a forest, but this is only an example, and various other correspondences between the traveling location and the screen shape to be deformed and controlled are conceivable.
  • The traveling location (city, forest, etc.) is also just one example of the state of the space outside the vehicle; that state is not limited to this and may include, for example, whether it is bright or dark outside, or whether the road is congested.
  • As described above, in the in-vehicle projection system 10, the screen shape is determined based on the content or the viewing environment information of the image projected on the screen 105, and the shape of the screen 105 is deformed and controlled into the determined screen shape, so that the occupants can be provided with a good video viewing experience.
  • In the embodiment described above, the screen 105 is arranged in the vehicle interior space, but the present technology can be similarly applied to a projection system in which the screen is arranged in a space other than a vehicle interior.
  • the technology can have the following configurations.
  • (1) An information processing device including a control unit that controls a first process of generating a display image based on content and projecting the generated display image onto a screen arranged in a predetermined space, and a second process of determining a screen shape based on the content or the viewing environment information of the image projected on the screen and deforming the screen into the determined screen shape.
  • (2) The information processing device according to (1) above, wherein the predetermined space is an interior space of a vehicle and the screen is arranged corresponding to a ceiling portion of the vehicle.
  • (3) The information processing apparatus according to (2), wherein in the second process, the shape of the screen is deformed by a deformation mechanism arranged between the ceiling portion of the car and the screen.
  • (9) The information processing device according to (8) above, wherein the screen shape information stored in the metadata is set in the content server or the metadata server based on the history of the screen shapes used by a large number of people when projecting images based on the content onto the screen.
  • (10) The information processing device according to any one of (1) to (3) above, wherein the viewing environment information includes information on a plurality of viewing positions for viewing the image projected on the screen and the number of viewers at each viewing position.
  • (11) The information processing device according to (10) above, wherein in the second process the screen shape is determined so as to match the viewing position where the number of viewers is the largest.
  • (12) The information processing device according to (10) or (11) above, wherein the predetermined space is an interior space of a vehicle, the screen is arranged on a ceiling portion of the vehicle, and the viewing environment information includes information on seat positions in the front-rear direction of the vehicle and the number of occupants at each seat position.
  • (13) The viewing environment information further includes state information of the occupants.
  • (14) The information processing device according to (13) above, wherein in the second process the number of occupants at each seat position is adjusted based on the occupant state information, and the screen shape is determined so as to match the seat position with the largest number of occupants after the adjustment.
  • (15) The viewing environment information further includes attribute information of the viewer at each viewing position.
  • (16) In the second process, the screen shape is determined so as to match the viewing position where a priority viewer is present.
  • (17) The information processing device according to any one of (1) to (3) above, wherein the predetermined space is an interior space of a vehicle, the screen is arranged on a ceiling portion of the vehicle, and the viewing environment information includes information on the place where the vehicle is traveling.
  • (18) The control unit further controls a process of recording, in association with the content, information on changes made to the shape of the screen while an image projected on the screen is being viewed.
  • (19) An information processing method comprising a procedure of controlling a first process of generating a display image based on content and projecting the generated display image onto a screen arranged in a predetermined space, and a second process of determining a screen shape based on the content or the viewing environment information of the image projected on the screen and deforming the screen into the determined screen shape.
  • (20) A program that causes a computer to control a first process of generating a display image based on content and projecting the generated display image onto a screen arranged in a predetermined space, and a second process of determining a screen shape based on the content or the viewing environment information of the image projected on the screen and deforming the screen into the determined screen shape.
  • 10 ... In-vehicle projection system, 100 ... Car, 101 ... Front seat, 102 ... Rear seat, 103 ... Ceiling portion, 105 ... Screen, 106 ... Deformation mechanism, 107 ... Projector, 108 ... Actuator, 109 ... Air balloon, 110 ... Sensor group, 120 ... CPU, 121 ... ROM, 122 ... RAM, 123 ... Bus, 124 ... Input/output interface, 125 ... Operation unit, 126 ... Input/output unit, 127 ... Storage unit, 128 ... Display unit, 129 ... Projection unit, 130 ... Communication unit, 201 ... Content server, 202 ... Metadata server, 300 ... Network

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

In the present invention, a system, which projects a video onto a screen, provides a good video viewing experience. The system is provided with a control unit which controls first processing and second processing. The first processing is performed to: generate a video to be displayed on the basis of content; and project the generated video to be displayed onto a screen that is arranged in a predetermined space. The second processing is performed to: determine a screen shape on the basis of the content or information on a viewing environment of the video projected onto the screen; and change the shape of the screen to the determined screen shape.

Description

Information processing device, information processing method, and program
Patent Document 1: Japanese Unexamined Patent Publication No. 2017-193190
FIG. 1 is a diagram showing a configuration example of an in-vehicle projection system as an embodiment.
FIG. 2 is a diagram for explaining the configuration of a deformation mechanism.
FIG. 3 is a block diagram showing a configuration example of the projection system.
FIG. 4 is a diagram showing examples of deformed shapes of the screen.
FIG. 5 is a diagram for explaining a content server and a metadata server to which the in-vehicle projection system is connected.
FIG. 6 is a diagram for explaining that the screen shape information indicates a screen shape for the entire duration of the content or a screen shape for each divided section of the content.
FIG. 7 is a diagram for explaining an example of metadata in which screen shape information is stored.
FIG. 8 is a diagram for explaining that the screen shape information indicates the shape of the entire area of the screen or a screen shape for each divided area of the screen.
FIG. 9 is a diagram for explaining another example of metadata in which screen shape information is stored.
FIG. 10 is a flowchart showing an example of a procedure in the CPU for controlling deformation of the screen shape based on the content.
FIG. 11 is a diagram showing the screen as seen when looking up at it from an occupant's position.
FIG. 12 is a diagram showing states in which there are occupants in both the front seat and the rear seat.
FIG. 13 is a diagram showing a state in which two occupants are sitting in the front seat and one occupant is sitting in the rear seat.
FIG. 14 is a flowchart showing an example of a procedure in the CPU for controlling deformation of the screen shape based on the numbers of occupants sitting in the front seat and the rear seat.
FIG. 15 is a diagram showing an example of a rule used when deforming the screen shape based on the numbers of occupants in the front seat and the rear seat.
FIG. 16 is a diagram showing a state in which two occupants are sitting in the front seat, one occupant is sitting in the rear seat, and the occupant in the rear seat is a priority occupant such as a company executive.
FIG. 17 is a flowchart showing an example of a procedure in the CPU for controlling deformation of the screen shape based on the numbers of occupants in the front seat and the rear seat and, additionally, on the occupants' attributes.
FIG. 18 is a diagram showing a state in which two occupants are sitting in the front seat and two occupants are sitting in the rear seat.
FIG. 19 is a flowchart showing an example of a procedure in the CPU for controlling deformation of the screen shape based on the numbers of occupants in the front seat and the rear seat.
FIG. 20 is a diagram showing a state in which two occupants are sitting in the front seat, one occupant is sitting in the rear seat, and one of the two front-seat occupants is reclining with the backrest tilted down.
FIG. 21 is a flowchart showing an example of a procedure in the CPU for controlling deformation of the screen shape based on the numbers of occupants sitting in the front seat and the rear seat and, additionally, on the occupants' reclining states.
FIG. 22 is a diagram showing a state in which the shape of the screen has been deformed under control into concave shapes optimal for the occupants of the front seat and the rear seat, respectively.
FIG. 23 is a diagram showing an example of the correspondence between the vehicle's traveling location and the deformation control of the screen shape.
FIG. 24 is a flowchart showing an example of a procedure in the CPU for controlling deformation of the screen shape based on the traveling location.
 Hereinafter, a mode for carrying out the invention (hereinafter referred to as an "embodiment") will be described. The description will be given in the following order.
 1. Embodiment
 2. Modification examples
 <1. Embodiment>
 "Configuration of in-vehicle projection system"
 FIG. 1 schematically shows a configuration example of an in-vehicle projection system 10 as an embodiment. Inside the car 100, that is, in the vehicle interior space, there are, as interior objects, a front seat 101 located on the front side in the traveling direction and a rear seat 102 located behind it. There are two front seats 101 and two rear seats 102 in the left-right direction. Each of the front seat 101 and the rear seat 102 is configured as a reclining seat whose backrest can be tilted rearward.
 A screen 105 is arranged to correspond to the ceiling portion 103 of the car 100. A display image generated based on content is projected onto and displayed on the screen 105 by a projector 107 arranged, for example, at the rear of the car 100. The image displayed on the screen 105 in this way can be viewed by the occupants in the front seat 101 and the rear seat 102.
 The screen 105 can be deformed in shape, for example into a flat, concave, or convex shape. This deformation is possible not only for the entire surface of the screen 105 but also for each divided region of the screen 105. The deformation of the screen 105 is performed by a deformation mechanism 106 arranged between the ceiling portion 103 and the screen 105.
 As shown in FIG. 2(a), for example, the deformation mechanism 106 is configured by arranging a plurality of actuators 108 evenly in two dimensions over the entire surface of the screen 105. The configuration example of FIG. 1 shows a case in which the deformation mechanism 106 is composed of a plurality of actuators 108. The deformation mechanism 106 is not limited to being composed of a plurality of actuators 108; for example, as shown in FIG. 2(b), a plurality of air balloons 109 may instead be arranged evenly in two dimensions over the entire surface of the screen 105.
 In this embodiment, the screen shape is determined based on the content or on the viewing environment information of the image projected on the screen 105, and the screen is deformed under control into the determined shape. With this configuration, a good image viewing experience can be provided to the occupants of the car 100, who are the viewers.
 FIG. 3 is a block diagram showing a configuration example of the in-vehicle projection system 10. The in-vehicle projection system 10 includes the projector 107, the deformation mechanism 106, the screen 105, and a sensor group 110. As described above, the screen 105 is arranged to correspond to the ceiling portion 103 of the car 100. A display image generated based on content is projected onto and displayed on the screen 105 by the projector 107.
 As described above, the deformation mechanism 106 is arranged between the ceiling portion 103 of the car 100 and the screen 105, and deforms the shape of the screen 105. The sensor group 110 includes various sensors for obtaining viewing environment information of the image projected on the screen.
 The sensor group 110 includes, for example, sensors for detecting the number of occupants and whether each occupant is in the front seat 101 or the rear seat 102 (for example, a seating sensor, a human presence sensor, or an image sensor), sensors for obtaining attribute information such as VIP, company executive, customer, boss, senior colleague, colleague, or friend (for example, an image sensor), sensors for detecting whether the front seat 101 or the rear seat 102 in which an occupant sits is in a reclining state (for example, a reclining sensor or an image sensor), and sensors for detecting the location where the vehicle is traveling (for example, an image sensor).
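As one way to picture how these sensor outputs might be consolidated, the following sketch (Python; the class and field names are hypothetical and are not taken from the publication) bundles the items listed above into a single viewing-environment record that the decision logic described later could consume.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Occupant:
    seat: str                 # "front" or "rear"
    attribute: Optional[str]  # e.g. "VIP", "executive", or None
    reclining: bool = False   # True if the seat back is tilted down

@dataclass
class ViewingEnvironment:
    occupants: List[Occupant] = field(default_factory=list)
    location: Optional[str] = None   # e.g. "city", "highway"

    def count(self, seat: str, exclude_reclining: bool = False) -> int:
        """Number of occupants at a seat position, optionally ignoring reclined ones."""
        return sum(
            1 for o in self.occupants
            if o.seat == seat and not (exclude_reclining and o.reclining)
        )
```

This is only an illustrative container; the publication itself describes the information at the level of sensor signals supplied to the CPU 120.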
 The projector 107 includes a CPU 120, a ROM 121, a RAM 122, a bus 123, an input/output interface 124, an operation unit 125, an input/output unit 126, a storage unit 127, a display unit 128, a projection unit 129, and a communication unit 130.
 The CPU 120 functions as, for example, an arithmetic processing device or a control device, and controls the operation of each component based on various programs recorded in, for example, the ROM 121. The ROM 121 is a means for storing programs read into the CPU 120, data used for calculations, and the like. The RAM 122 temporarily or permanently stores, for example, programs read into the CPU 120 and various data that change as appropriate while those programs are executed.
 The CPU 120, the ROM 121, and the RAM 122 are connected to one another via the bus 123. The bus 123 is in turn connected to various components via the input/output interface 124.
 The operation unit 125 constitutes, for example, a user interface with which the user performs various operations. The sensor group 110 is connected to the input/output unit 126. The detection signals of the sensors included in the sensor group 110 are supplied to the CPU 120 via the input/output unit 126. The CPU 120 acquires the viewing environment information of the image projected on the screen 105 based on these detection signals.
 The deformation mechanism 106 is also connected to the input/output unit 126. The CPU 120 determines the screen shape based on the content or on the viewing environment information of the image projected on the screen 105, and controls the deformation mechanism 106 so that the screen 105 is deformed into the determined shape.
 The storage unit 127 is composed of an HDD (Hard Disk Drive) or a flash memory and stores the content for the images to be displayed on the screen 105. The content may be stored in the storage unit 127 in advance, or, as described later, may be downloaded from a content server via a network such as the Internet and then stored. The storage unit 127 also stores, together with the content, the metadata associated with it in which screen shape information is stored. Further, when a user (occupant) operates the operation unit 125 to change the shape of the screen 105 while viewing the image projected on the screen 105, the storage unit 127 stores that change information.
 The display unit 128 generates a display image based on content. This content may be content stored in the storage unit 127 or content streamed from a content server via a network such as the Internet. The projection unit 129 projects the display image generated by the display unit 128 onto the screen 105.
 The communication unit 130 communicates, via a network such as the Internet, with a content server and with a metadata server described later, downloads or streams content from the content server, and acquires, from the content server or the metadata server, the metadata associated with the content in which screen shape information is stored.
 When the user (occupant) changes the shape of the screen 105 while viewing the image projected on it, the communication unit 130 also transmits that change information to the content server or the metadata server via a network such as the Internet. This allows the content server or the metadata server to set, or reset, the screen shape information stored in the metadata associated with a piece of content based on the history of screen shapes used when many users projected images based on that same content onto their screens 105, and to distribute that screen shape information in association with the content.
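To make the server-side use of this history concrete, here is a minimal sketch (Python, with hypothetical function and field names; the publication does not specify an aggregation rule) that picks, for each scene of a piece of content, the shape most frequently reported by users and writes it back into the metadata.

```python
from collections import Counter
from typing import Dict, List

# Reported changes: one entry per user per scene, e.g. {"scene_id": 2, "shape": "flat"}
def update_shape_metadata(metadata: Dict, reports: List[Dict]) -> Dict:
    """Reset each scene's screen shape to the one most users actually chose."""
    by_scene: Dict[int, Counter] = {}
    for r in reports:
        by_scene.setdefault(r["scene_id"], Counter())[r["shape"]] += 1

    for scene in metadata["scenes"]:
        votes = by_scene.get(scene["scene_id"])
        if votes:
            scene["screen_shape"] = votes.most_common(1)[0][0]
    return metadata
```

A majority vote is only one plausible policy; the publication says only that the shape information may be set or reset based on the usage history.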
 "Determination and control of screen shape"
 The determination of the screen shape will now be described. As described above, the shape of the screen 105 is determined based on the content or on the viewing environment information of the image projected on the screen 105.
 "Determination and control based on content"
 First, the case where the screen shape is determined and controlled based on the content will be described. FIG. 4 shows examples of deformed shapes of the screen 105. The shape of the screen 105 is determined, for example, based on the nature of the image projected onto it. By deforming the screen 105 into a shape determined according to the nature of the projected image, the occupants' image viewing experience can be improved.
 FIG. 4(a) shows an example in which the screen 105 is flat. Content that benefits from an at-a-glance overview and is usually viewed on a flat surface, such as news, a list of destinations, or itinerary information including stop-off information, becomes more legible when the screen 105 is flat. FIG. 4(b) shows an example in which the screen 105 is concave. A concave surface that evokes a sense of depth can heighten the sense of immersion in omnidirectional images such as a planetarium. FIG. 4(c) shows an example in which the screen 105 is convex. A convex surface provides a viewing experience that is rarely encountered; an example of an image projected in this case is the surface of the moon.
 The screen shape is determined, for example, based on metadata that stores screen shape information and is associated with the content of the projected image. When the content of the projected image is obtained from the storage unit 127, the metadata is obtained, for example, from the storage unit 127 as well. When the content is streamed from a content server via a network such as the Internet, the metadata is obtained, for example, from the content server or from a metadata server separate from the content server. Even when the content is obtained from the storage unit 127, the metadata may still be obtained from the metadata server.
 FIG. 5(a) shows an example in which the in-vehicle projection system 10 is connected to a content server 201 via a network 300. In this case, the content server 201 distributes the content of the projected image to the in-vehicle projection system 10 via the network 300, and this content includes metadata in which the screen shape information is stored.
 FIG. 5(b) shows an example in which the in-vehicle projection system 10 is connected to the content server 201 and a metadata server 202 via the network 300. In this case, the content server 201 distributes the content of the projected image to the in-vehicle projection system 10 via the network 300, and the in-vehicle projection system 10 acquires the metadata storing the screen shape information associated with that content from the metadata server 202 via the network 300.
 "Example of expressing screen shape information as metadata"
 An example of expressing the screen shape information as metadata will now be described. The screen shape information stored in the metadata may indicate a screen shape for the entire duration of the content, as illustrated in FIG. 6(a), or a screen shape for each divided section of the content, as illustrated in FIG. 6(b). In the case of FIG. 6(a), the screen shape is concave for the entire duration; in the case of FIG. 6(b), the screen shape is flat in the first section, convex in the next section, and flat in the last section.
 The screen shape information stored in the metadata is composed of one or more information sets, each containing a section and a screen shape. FIG. 7(a) shows an example of the metadata when a screen shape is indicated for the entire duration of the content (see FIG. 6(a)). In this case there is a single scene, and the metadata consists of a table holding, for the scene "Scene ID = 1", an information set with start time "00:00:00", end time "01:00:00", and screen shape "concave". This information set indicates that the screen shape in the section from "00:00:00" to "01:00:00" is concave.
 FIG. 7(b) shows an example of the metadata when a screen shape is indicated for each divided section of the content (see FIG. 6(b)). In this case there are three scenes, and the metadata consists of a table holding: for the scene "Scene ID = 1", an information set with start time "00:00:00", end time "00:30:00", and screen shape "flat"; for the scene "Scene ID = 2", an information set with start time "00:30:00", end time "00:40:00", and screen shape "convex"; and for the scene "Scene ID = 3", an information set with start time "00:40:00", end time "01:00:00", and screen shape "flat".
 Here, the information set of the scene "Scene ID = 1" indicates that the screen shape in the section from "00:00:00" to "00:30:00" is flat, the information set of the scene "Scene ID = 2" indicates that the screen shape in the section from "00:30:00" to "00:40:00" is convex, and the information set of the scene "Scene ID = 3" indicates that the screen shape in the section from "00:40:00" to "01:00:00" is flat.
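The publication presents this metadata as a table; the following sketch renders the same FIG. 7(b) values as a Python data structure (the key names are hypothetical and chosen only for illustration).

```python
# Screen shape metadata corresponding to the example of FIG. 7(b): one entry per scene.
shape_metadata = {
    "scenes": [
        {"scene_id": 1, "start": "00:00:00", "end": "00:30:00", "screen_shape": "flat"},
        {"scene_id": 2, "start": "00:30:00", "end": "00:40:00", "screen_shape": "convex"},
        {"scene_id": 3, "start": "00:40:00", "end": "01:00:00", "screen_shape": "flat"},
    ]
}
```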
 The screen shape information may also indicate the shape of the entire area of the screen 105, as illustrated in FIG. 8(a), or a screen shape for each divided area of the screen 105, as illustrated in FIG. 8(b). In the case of FIG. 8(a), the entire area is concave; in the case of FIG. 8(b), a partial area in the center is convex and the remaining area is flat.
 To represent the all-area concave surface shown in FIG. 8(a), as shown in FIG. 9(a), an information set consisting of start coordinates (0,0), end coordinates (100,100), and screen shape "concave" is defined as the all-area concave scene mode "Scene Mode ID = 1". To represent the shape of FIG. 8(b), in which a central partial area is convex and the remaining area is flat, an information set consisting of start coordinates (0,0), end coordinates (100,100), and screen shape "flat" is defined as the all-area flat scene mode "Scene Mode ID = 2", and an information set consisting of start coordinates (10,40), end coordinates (60,80), and screen shape "convex" is defined as the partially convex scene mode "Scene Mode ID = 3".
 FIG. 9(b) shows an example of the metadata when a screen shape is indicated for each divided section of the content. In the example of FIG. 7(b), the information set of each scene directly holds the screen shape information, whereas in the example of FIG. 9(b) each scene instead holds the scene modes defined in FIG. 9(a).
 The metadata shown in FIG. 9(b) consists of a table holding: for the scene "Scene ID = 1", an information set with start time "00:00:00", end time "00:30:00", "Scene Mode ID(base) = 1", and "Scene Mode ID(top) = not set"; for the scene "Scene ID = 2", an information set with start time "00:30:00", end time "00:40:00", "Scene Mode ID(base) = 2", and "Scene Mode ID(top) = 3"; and for the scene "Scene ID = 3", an information set with start time "00:40:00", end time "01:00:00", "Scene Mode ID(base) = 1", and "Scene Mode ID(top) = not set". Here, "Scene Mode ID(base)" indicates the shape of the entire area of the screen 105, and "Scene Mode ID(top)" indicates a partial area of that whole area and the shape with which that partial area replaces the shape indicated by "Scene Mode ID(base)".
 Thus, the information set of the scene "Scene ID = 1" indicates, through "Scene Mode ID(base) = 1", that the entire area is concave in the section from "00:00:00" to "00:30:00". The information set of the scene "Scene ID = 2" indicates, through "Scene Mode ID(base) = 2" and "Scene Mode ID(top) = 3", that a partial area is convex and the remaining area is flat in the section from "00:30:00" to "00:40:00". The information set of the scene "Scene ID = 3" indicates, through "Scene Mode ID(base) = 1", that the entire area is concave in the section from "00:40:00" to "01:00:00".
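Rendered in the same illustrative Python form as above (field names again hypothetical), the FIG. 9(a)/(b) variant separates the region definitions from the timeline that refers to them.

```python
# Scene modes of FIG. 9(a): each defines a rectangular region and a screen shape.
scene_modes = {
    1: {"start_xy": (0, 0),   "end_xy": (100, 100), "screen_shape": "concave"},
    2: {"start_xy": (0, 0),   "end_xy": (100, 100), "screen_shape": "flat"},
    3: {"start_xy": (10, 40), "end_xy": (60, 80),   "screen_shape": "convex"},
}

# Timeline of FIG. 9(b): "base" covers the whole screen, "top" (if set) overrides a sub-area.
scenes = [
    {"scene_id": 1, "start": "00:00:00", "end": "00:30:00", "base": 1, "top": None},
    {"scene_id": 2, "start": "00:30:00", "end": "00:40:00", "base": 2, "top": 3},
    {"scene_id": 3, "start": "00:40:00", "end": "01:00:00", "base": 1, "top": None},
]
```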
 To define more complicated deformation regions within the screen 105, the table of deformation modes may be extended, or the regions may be expressed by a combination of a deformation mode and bitmap data.
 The flowchart of FIG. 10 shows an example of a procedure in the CPU 120 for controlling deformation of the screen shape based on the content.
 In step ST1, the CPU 120 starts the processing. Next, in step ST2, it acquires the screen shape information (metadata) associated with the content. In this case, the metadata is read from the storage unit 127 in which the content is stored, or received from the content server 201 from which the content is received, or from the metadata server 202 separate from the content server 201.
 Next, in step ST3, the CPU 120 controls deformation of the screen 105 based on the acquired screen shape information. After the processing of step ST3, the CPU 120 ends the processing in step ST4.
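A minimal sketch of this ST1 to ST4 flow might look as follows (Python; the `deform_screen` callback and the time handling are assumptions made for illustration and are not specified in the publication). It uses the `shape_metadata` structure shown earlier.

```python
from datetime import timedelta

def _to_seconds(hms: str) -> int:
    h, m, s = (int(x) for x in hms.split(":"))
    return int(timedelta(hours=h, minutes=m, seconds=s).total_seconds())

def apply_content_based_shape(shape_metadata: dict, playback_seconds: int, deform_screen) -> None:
    """ST2-ST3: look up the scene covering the current playback time and deform the screen."""
    for scene in shape_metadata["scenes"]:
        if _to_seconds(scene["start"]) <= playback_seconds < _to_seconds(scene["end"]):
            deform_screen(scene["screen_shape"])   # e.g. "flat", "concave", "convex"
            return
```

Here `deform_screen` stands in for whatever command interface drives the deformation mechanism 106 (the actuators 108 or air balloons 109).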
 In the embodiment described above, the shape of the screen 105 is deformed under control based on the screen shape information (metadata) associated with the content. However, the content of the image projected on the screen 105 may instead be determined by some other method, for example by video analysis, and the shape of the screen 105 may be deformed under control based on that determination result. Alternatively, for example, the content of the image projected on the screen 105 may be recognized from input by an occupant (user), and the shape of the screen 105 may be deformed under control based on that recognition result.
 "Determination and control based on viewing environment information"
 Next, the case where the shape of the screen 105 is determined and controlled based on the viewing environment information of the image projected on the screen 105 will be described.
 FIG. 11 shows the screen 105 as seen when looking up at it from an occupant's position. In the case of FIG. 11(a), the screen 105 is flat, and when looked up at from the occupant's position the viewing angle is steep and the screen is hard to see. In the case of FIG. 11(b), on the other hand, the screen 105 is concave and is easy to see when looked up at from the occupant's position. In this case, therefore, it is desirable to deform the screen 105 into a concave shape.
 FIG. 12 shows states in which there are occupants in both the front seat 101 and the rear seat 102. In FIG. 12(a) the screen 105 is flat, and in FIG. 12(b) it is concave. For an occupant of the front seat 101, a concave screen 105 may be easier to see than a flat one, whereas for an occupant of the rear seat 102 a concave screen may present too shallow an angle, making a flat screen easier to see. In such a case, the shape of the screen 105 may be determined, for example, by the numbers of occupants in the front seat 101 and the rear seat 102. This allows the screen shape to be determined appropriately according to those numbers.
 FIG. 13 shows a state in which two occupants are sitting in the front seat 101 and one occupant is sitting in the rear seat 102. In this case, it is desirable to deform the screen 105 into a concave shape, which is easier to see from the front seat 101 where more occupants are seated.
 The flowchart of FIG. 14 shows an example of a procedure in the CPU 120 for controlling deformation of the screen shape based on the numbers of occupants sitting in the front seat 101 and the rear seat 102.
 In step ST11, the CPU 120 starts the processing. Next, in step ST12, the CPU 120 checks the number of occupants in the front seat 101 and the number of occupants in the rear seat 102. In this case, the CPU 120 checks the number of occupants in each seat based on the output signal of a sensor such as a seating sensor, a human presence sensor, or an image sensor.
 Next, in step ST13, the CPU 120 determines whether the number of occupants in the front seat 101 is larger than the number of occupants in the rear seat 102. If it is not, the CPU 120, in step ST14, determines the shape of the screen 105 to suit the rear seat 102, that is, a shape that is easy to see from the rear seat 102, for example a flat shape, and controls the deformation accordingly. After the processing of step ST14, the CPU 120 ends the processing in step ST15.
 If, on the other hand, the number of occupants in the front seat 101 is larger than the number of occupants in the rear seat 102 in step ST13, then in step ST16 the CPU 120 determines the shape of the screen 105 to suit the front seat 101, that is, a shape that is easy to see from the front seat 101, for example a concave shape, and controls the deformation accordingly. After the processing of step ST16, the CPU 120 ends the processing in step ST15.
 FIG. 15 shows an example of the rule used when deforming the screen shape based on the numbers of occupants in the front seat 101 and the rear seat 102. As described above, when the number of occupants in the front seat 101 is not larger than that in the rear seat 102, the shape of the screen 105 is deformed under control to suit the rear seat 102; in this case the screen 105 is made flat, which is easy for the occupants of the rear seat 102 to see (see FIG. 12(a)).
 Also as described above, when the number of occupants in the front seat 101 is larger than that in the rear seat 102, the shape of the screen 105 is deformed under control to suit the front seat 101; in this case the screen 105 is made concave, which is easy for the occupants of the front seat 101 to see (see FIGS. 12(b) and 13).
 By controlling the deformation of the screen shape based on the numbers of occupants sitting in the front seat 101 and the rear seat 102 in this way, the projected image can be viewed comfortably at the seat position with the larger number of occupants.
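A minimal sketch of the ST11 to ST16 decision, using the illustrative `ViewingEnvironment` structure introduced above (again a non-authoritative rendering with hypothetical names):

```python
def shape_by_occupant_count(env: ViewingEnvironment) -> str:
    """FIG. 14 / FIG. 15 rule: favor the seat row with more occupants (ties go to the rear seat)."""
    if env.count("front") > env.count("rear"):
        return "concave"   # easier to see from the front seat 101
    return "flat"          # easier to see from the rear seat 102
```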
 The above example determines the screen shape based on the numbers of occupants in the front seat 101 and the rear seat 102. However, the shape of the screen 105 may also be determined by further taking into account the attributes of the occupants of the front seat 101 and the rear seat 102, such as VIP, company executive, customer, boss, senior colleague, colleague, or friend. This allows the screen shape to be determined appropriately, including the attributes of the occupants sitting in the front seat 101 and the rear seat 102.
 FIG. 16 shows a state in which two occupants are sitting in the front seat 101, one occupant is sitting in the rear seat 102, and the occupant in the rear seat 102 is a priority occupant, for example a company executive. In this case, although the number of occupants in the front seat 101 is larger than that in the rear seat 102, the shape of the screen 105 is determined to be flat, which is easy to see from the rear seat 102 where the priority occupant is seated.
 The flowchart of FIG. 17 shows an example of a procedure in the CPU 120 for controlling deformation of the screen shape based on the numbers of occupants in the front seat 101 and the rear seat 102 and, additionally, on the occupants' attributes.
 In step ST21, the CPU 120 starts the processing. Next, in step ST22, the CPU 120 checks the number of occupants in the front seat 101 and the number of occupants in the rear seat 102. In this case, the CPU 120 checks the number of occupants in each seat based on the output signal of a sensor such as a seating sensor, a human presence sensor, or an image sensor.
 Next, in step ST23, the CPU 120 checks the occupants' attributes, such as VIP, company executive, customer, boss, senior colleague, colleague, or friend. In this case, the CPU 120 can check the occupants' attributes, for example, by performing face recognition processing on the image signal output from an image sensor, or by accessing a smartphone, wearable device, or the like carried by an occupant.
 Next, in step ST24, the CPU 120 determines, based on the occupants' attribute information, whether a priority occupant is present. If a priority occupant is present, the CPU 120, in step ST25, determines the shape of the screen 105 to suit the priority occupant's seat position, that is, a shape that is easy to see from that seat position, and controls the deformation accordingly. For example, if the priority occupant is sitting in the front seat 101, the screen is deformed to suit the front seat 101, for example into a concave shape, so that the priority occupant can see it easily; if the priority occupant is sitting in the rear seat 102, the screen is deformed to suit the rear seat 102, for example into a flat shape. After the processing of step ST25, the CPU 120 ends the processing in step ST26.
 If no priority occupant is present in step ST24, the CPU 120 determines, in step ST27, whether the number of occupants in the front seat 101 is larger than the number of occupants in the rear seat 102. If it is not, the CPU 120, in step ST28, determines the shape of the screen 105 to suit the rear seat 102, that is, a shape that is easy to see from the rear seat 102, for example a flat shape, and controls the deformation accordingly. After the processing of step ST28, the CPU 120 ends the processing in step ST26.
 If, on the other hand, the number of occupants in the front seat 101 is larger than the number of occupants in the rear seat 102 in step ST27, then in step ST29 the CPU 120 determines the shape of the screen 105 to suit the front seat 101, that is, a shape that is easy to see from the front seat 101, for example a concave shape, and controls the deformation accordingly. After the processing of step ST29, the CPU 120 ends the processing in step ST26.
 By controlling the deformation of the screen shape based on the occupants' attributes in addition to the numbers of occupants in the front seat 101 and the rear seat 102 in this way, the projected image can basically be viewed comfortably at the seat position with more occupants, while even a seat position with fewer occupants allows comfortable viewing of the projected image if a priority occupant is seated there.
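Extending the previous sketch with the ST23 to ST25 priority check (names remain hypothetical; the particular set of attributes treated as "priority" is an assumption, since the publication only gives examples such as VIPs and company executives):

```python
PRIORITY_ATTRIBUTES = {"VIP", "executive", "customer"}   # assumed examples of priority occupants

def shape_with_priority(env: ViewingEnvironment) -> str:
    """FIG. 17: a priority occupant's seat row wins; otherwise fall back to occupant counts."""
    for o in env.occupants:
        if o.attribute in PRIORITY_ATTRIBUTES:
            return "concave" if o.seat == "front" else "flat"
    return shape_by_occupant_count(env)
```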
 The above examples match the screen shape to one of the seat positions based on the numbers of occupants in the front seat 101 and the rear seat 102. However, the screen shape may instead be determined so that it is not difficult to see from either seat position.
 FIG. 18 shows a state in which two occupants are sitting in the front seat 101 and two occupants are sitting in the rear seat 102. In this case, the shape of the screen 105 is determined to be intermediate between the concave shape that is easy to see from the front seat 101 (shown by the dash-dot chain line) and the flat shape that is easy to see from the rear seat 102 (shown by the two-dot chain line).
 The flowchart of FIG. 19 shows an example of a procedure in the CPU 120 for controlling deformation of the screen shape based on the numbers of occupants in the front seat 101 and the rear seat 102.
 In step ST31, the CPU 120 starts the processing. Next, in step ST32, the CPU 120 checks the number of occupants in the front seat 101 and the number of occupants in the rear seat 102. In this case, the CPU 120 checks the number of occupants sitting in each seat based on the output signal of a sensor such as a seating sensor, a human presence sensor, or an image sensor.
 Next, in step ST33, the CPU 120 compares the number of occupants in the front seat 101 with the number of occupants in the rear seat 102. If the number of occupants in the front seat 101 is smaller, the CPU 120, in step ST34, determines the shape of the screen 105 to suit the rear seat 102, that is, a shape that is easy to see from the rear seat 102, for example a flat shape, and controls the deformation accordingly. After the processing of step ST34, the CPU 120 ends the processing in step ST35.
 If the number of occupants in the front seat 101 is larger than that in the rear seat 102 in step ST33, then in step ST36 the CPU 120 determines the shape of the screen 105 to suit the front seat 101, that is, a shape that is easy to see from the front seat 101, for example a concave shape, and controls the deformation accordingly. After the processing of step ST36, the CPU 120 ends the processing in step ST35.
 If the numbers of occupants in the front seat 101 and the rear seat 102 are the same in step ST33, then in step ST37 the CPU 120 determines the shape of the screen 105 to be intermediate between the front-seat-matched and rear-seat-matched shapes, that is, intermediate between the concave shape that is easy to see from the front seat 101 and the flat shape that is easy to see from the rear seat 102, and controls the deformation accordingly. After the processing of step ST37, the CPU 120 ends the processing in step ST35.
 By controlling the deformation of the screen shape based on the numbers of occupants in the front seat 101 and the rear seat 102 in this way, the projected image can be viewed comfortably at the seat position with more occupants. When the numbers are the same, the screen is deformed into a shape intermediate between the concave shape easy to see from the front seat 101 and the flat shape easy to see from the rear seat 102, so that the image is not particularly difficult to see from either seat position.
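The ST31 to ST37 variant differs from the earlier count-based rule only in the tie case. A sketch (the "semi-concave" label for the intermediate shape is an assumption for illustration):

```python
def shape_with_tie_handling(env: ViewingEnvironment) -> str:
    """FIG. 19: equal front/rear occupancy yields an intermediate shape."""
    front, rear = env.count("front"), env.count("rear")
    if front > rear:
        return "concave"
    if front < rear:
        return "flat"
    return "semi-concave"   # intermediate between the front- and rear-matched shapes
```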
 The above examples determine the screen shape based on the numbers of occupants in the front seat 101 and the rear seat 102. However, the shape of the screen 105 may also be determined by taking into account the state of the occupants sitting in the front seat 101 and the rear seat 102, for example whether their seats are reclined. This allows the screen shape to be determined appropriately, including the state of the occupants sitting in the front seat 101 and the rear seat 102.
 FIG. 20 shows a state in which two occupants are sitting in the front seat 101, one occupant is sitting in the rear seat 102, and one of the two occupants in the front seat 101 is reclining with the backrest tilted down.
 In this case, although the number of occupants in the front seat 101 is larger than that in the rear seat 102, one of the two front-seat occupants is in a reclining state, from which a screen shape matched to the rear seat 102, that is, a flat shape, is easier to see. Therefore, in this case the front-seat count is adjusted from two to one, the rear-seat count from one to two, and the shape of the screen 105 is determined to be flat, which is easy to see from the rear seat 102.
 The flowchart of FIG. 21 shows an example of a procedure in the CPU 120 for controlling deformation of the screen shape based on the numbers of occupants sitting in the front seat 101 and the rear seat 102 and, additionally, on the occupants' reclining states.
 In step ST41, the CPU 120 starts the processing. Next, in step ST42, the CPU 120 checks the number of occupants in the front seat 101 and the number of occupants in the rear seat 102. In this case, the CPU 120 checks the number of occupants sitting in each seat based on the output signal of a sensor such as a seating sensor, a human presence sensor, or an image sensor.
 Next, in step ST43, the CPU 120 checks the reclining state of the occupants of the front seat 101. In this case, the CPU 120 checks the reclining state of the front seat 101 based on the output signal of a sensor such as a reclining sensor or an image sensor.
 Next, in step ST44, the CPU 120 determines the number of occupants in the front seat 101 who are in a reclining state. If that number is zero, the CPU 120 proceeds immediately to the processing of step ST45.
 If one occupant is in a reclining state in step ST44, the CPU 120, in step ST46, adjusts the counts by reducing the number of occupants in the front seat 101 by one and increasing the number of occupants in the rear seat 102 by one. After the processing of step ST46, the CPU 120 proceeds to the processing of step ST45.
 If two occupants are in a reclining state in step ST44, the CPU 120, in step ST47, adjusts the counts by reducing the number of occupants in the front seat 101 by two and increasing the number of occupants in the rear seat 102 by two. After the processing of step ST47, the CPU 120 proceeds to the processing of step ST45.
 In step ST45, the CPU 120 determines whether the number of occupants in the front seat 101 is larger than the number of occupants in the rear seat 102. If it is not, the CPU 120, in step ST48, determines the shape of the screen 105 to suit the rear seat 102, that is, a shape that is easy to see from the rear seat 102, for example a flat shape, and controls the deformation accordingly. After the processing of step ST48, the CPU 120 ends the processing in step ST49.
 If, on the other hand, the number of occupants in the front seat 101 is larger than the number of occupants in the rear seat 102 in step ST45, then in step ST50 the CPU 120 determines the shape of the screen 105 to suit the front seat 101, that is, a shape that is easy to see from the front seat 101, for example a concave shape, and controls the deformation accordingly. After the processing of step ST50, the CPU 120 ends the processing in step ST49.
 By controlling the deformation of the screen shape based on the reclining state of the front-seat occupants in addition to the numbers of occupants in the front seat 101 and the rear seat 102 in this way, the screen shape can be determined appropriately and its deformation controlled accordingly.
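The ST43 to ST47 adjustment can be sketched as shifting each reclined front-seat occupant into the rear-seat count before the comparison of step ST45 (illustrative names as before; only front-seat reclining is considered, as in FIG. 21):

```python
def shape_with_reclining(env: ViewingEnvironment) -> str:
    """FIG. 21: a reclined front-seat occupant is counted toward the rear seat."""
    reclined_front = env.count("front") - env.count("front", exclude_reclining=True)
    front = env.count("front") - reclined_front
    rear = env.count("rear") + reclined_front
    return "concave" if front > rear else "flat"
```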
 なお、乗員の状態としては、上述のリクライニング状態に限定されるものではなく、その他の状態も考えられる。例えば、乗員の状態として、スクリーン105とは別の方向を見ているか、または寝ているかなどの状態も考えられる。このようにスクリーン105を見ていない乗員に関しては、前座席101や後座席102の乗員の人数から減算する調整がされて、スクリーン105の形状が決定される。 The state of the occupant is not limited to the reclining state described above, and other states are also conceivable. For example, the state of the occupant may be a state of looking in a direction different from that of the screen 105 or a state of sleeping. As for the occupants who are not looking at the screen 105 in this way, the shape of the screen 105 is determined by making adjustments by subtracting from the number of occupants in the front seat 101 and the rear seat 102.
 上述では、スクリーン105の形状を前座席101または後座席102に合わせた形状にするか、あるいはその中間の形状にする例を示した。しかし、スクリーン105をより自由に変形させることができる場合には、例えば、前座席101および後座席102のそれぞれの乗員に見やすい形状に決定することも考えられる。 In the above, an example is shown in which the shape of the screen 105 is made to match the front seat 101 or the rear seat 102, or is made into an intermediate shape. However, if the screen 105 can be deformed more freely, it is also conceivable, for example, to determine a shape that is easy to view for the occupants of each of the front seat 101 and the rear seat 102.
 図22は、スクリーン105の形状を、前座席101および後座席102のそれぞれの乗員に最適な凹面形状に変形制御した状態を示している。 FIG. 22 shows a state in which the shape of the screen 105 is deformed and controlled into a concave shape that is optimal for each occupant of the front seat 101 and the rear seat 102.
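 When the screen can be shaped per divided region as in FIG. 22, the same occupant counts can instead drive a shape for each region. The sketch below is a speculative extension under that assumption; the region names and shape labels are not taken from the disclosure.

# Speculative per-region variant (cf. FIG. 22): each divided area of the screen 105
# is curved toward the seat row it serves when that row has viewers, and left flat
# otherwise. Region names and shape labels are illustrative only.
def decide_region_shapes(front_viewers, rear_viewers):
    return {
        "front_region": "concave_toward_front" if front_viewers > 0 else "flat",
        "rear_region": "concave_toward_rear" if rear_viewers > 0 else "flat",
    }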
 上述では、前座席101と後座席102の乗員の人数等に基づいてスクリーン形状を決定する例を示した。しかし、車外空間の状態、例えば走行している場所に応じて、スクリーン105の形状を決定することも考えられる。これにより、走行中の場所に応じて乗員に良好な映像観視体験を提供できる。 In the above, an example of determining the screen shape based on the number of passengers in the front seat 101 and the rear seat 102 is shown. However, it is also conceivable to determine the shape of the screen 105 according to the state of the space outside the vehicle, for example, the place where the vehicle is traveling. As a result, it is possible to provide the occupant with a good video viewing experience depending on the place where the vehicle is traveling.
 図23(a)は、一般の市街を走行している状態を示している。また、図23(b)は、高速道路で森林のなかを走行している状態を示している。なお、これらの図は、乗員とスクリーン105を後方から見た図である。 FIG. 23(a) shows a state of traveling in an ordinary urban area. FIG. 23(b) shows a state of traveling through a forest on a highway. Note that these figures are views of the occupants and the screen 105 as seen from the rear.
 図23(a)の状態では、スクリーン105の形状は凹面に決定されて変形制御される。この場合、車の側方にある建物や史跡の情報をスクリーン105に表示すると見やすくなる。図23(b)の状態では、車の側方に見るべきものがなく、スクリーン105の形状は平面に決定されて変形制御され、目的地や立ち寄り地の情報が表示されてもよい。 In the state of FIG. 23(a), the shape of the screen 105 is determined to be concave and the deformation is controlled accordingly. In this case, information on buildings and historic sites on the side of the car becomes easier to see when displayed on the screen 105. In the state of FIG. 23(b), there is nothing to be seen on the side of the car, so the shape of the screen 105 is determined to be flat and the deformation is controlled accordingly, and information on the destination and stop-off points may be displayed.
 図24のフローチャートは、CPU120における走行中の場所に基づいたスクリーン形状の変形制御の手順の一例を示している。 The flowchart of FIG. 24 shows an example of the procedure for controlling the deformation of the screen shape based on the traveling location in the CPU 120.
 CPU120は、ステップST61において、処理を開始する。次に、CPU120は、ステップST62において、走行中の場所をチェックする。この場合、CPU120は、例えば、車外空間を撮像するイメージセンサーのセンサー出力信号である画像信号に基づいて画像解析処理を施すことで、あるいはGPS情報やナビゲーションシステムの情報等に基づいて、走行中の場所をチェックする。 The CPU 120 starts processing in step ST61. Next, the CPU 120 checks the traveling location in step ST62. In this case, the CPU 120 checks the traveling location, for example, by performing image analysis processing on an image signal that is the sensor output signal of an image sensor capturing the space outside the vehicle, or based on GPS information, navigation system information, and the like.
 次に、CPU120は、ステップST63において、走行中の場所を判断する。走行中の場所が市街である場合、CPU120は、ステップST64において、スクリーン105の形状を凹面に決定して、変形制御する。CPU120は、ステップST64の処理の後、ステップST65において、処理を終了する。 Next, the CPU 120 determines the traveling location in step ST63. When the traveling place is an urban area, the CPU 120 determines the shape of the screen 105 as a concave surface in step ST64 and controls the deformation. After the process of step ST64, the CPU 120 ends the process in step ST65.
 一方、ステップST63で走行場所が森林である場合、CPU120は、ステップST66において、スクリーン105の形状を平面に決定して、変形制御する。CPU120は、ステップST66の処理の後、ステップST65において、処理を終了する。 On the other hand, when the traveling place is a forest in step ST63, the CPU 120 determines the shape of the screen 105 to be a flat surface in step ST66 and controls the deformation. After the process of step ST66, the CPU 120 ends the process in step ST65.
 なお、上述では走行している場所が市街か森林かによりスクリーン105の形状を決定しているが、これは一例であり、走行している場所と変形制御すべきスクリーン形状との対応関係は種々考えられる。 In the above, the shape of the screen 105 is determined depending on whether the traveling location is an urban area or a forest, but this is only an example, and various correspondences between the traveling location and the screen shape to be applied by deformation control are conceivable.
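 Reflecting the note above that the city/forest pairing is only one example, the branch of steps ST61 to ST66 can be sketched with the place-to-shape correspondence kept as a configurable table. This is again an illustrative Python sketch: the place classifier is a placeholder for the image analysis, GPS or navigation lookup of step ST62, and none of the names below come from the disclosure.

# Hypothetical mapping from the classified driving location to the screen shape.
# The city -> concave / forest -> flat pairing follows FIG. 23; other
# correspondences can be configured here.
PLACE_TO_SHAPE = {
    "city": "concave",   # buildings and historic sites at the roadside are worth showing
    "forest": "flat",    # nothing to see at the side; show destination / stop-off info
}

def classify_place(camera_frame, gps_fix, nav_info):
    """Placeholder for step ST62: classify the driving location from the exterior
    image sensor output, GPS information or navigation system information."""
    raise NotImplementedError  # depends on the actual sensor group 110

def decide_shape_for_place(place, default="flat"):
    """Steps ST63-ST66: look up the screen shape for the classified place."""
    return PLACE_TO_SHAPE.get(place, default)

# decide_shape_for_place("city") -> "concave"; decide_shape_for_place("forest") -> "flat"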
 また、上述では、車外空間の状態が走行中の場所(市街、森林など)である例を示したが、車外空間の状態はこれに限定されない。例えば、明るいか暗いかの状態、渋滞しているか否かの状態、なども考えられる。 Further, in the above, an example was shown in which the state of the space outside the vehicle is the place where the vehicle is traveling (city, forest, etc.), but the state of the space outside the vehicle is not limited to this. For example, whether it is bright or dark outside, or whether the road is congested, can also be considered.
 以上説明したように、図1に示す車載投影システム10においては、コンテンツまたはスクリーン105に投影される映像の観視環境情報に基づいてスクリーン形状を決定し、この決定されたスクリーン形状にスクリーン105の形状を変形制御するものであり、乗員に良好な映像観視体験を提供できる。 As described above, in the in-vehicle projection system 10 shown in FIG. 1, the screen shape is determined based on the viewing environment information of the content or of the image projected on the screen 105, and the shape of the screen 105 is deformed under control to the determined screen shape, so that the occupants can be provided with a good video viewing experience.
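 Tying the pieces together, the two controlled processes can be expressed, purely as an assumed object interface, as a small controller that projects each display image and pushes the decided shape to the deformation mechanism. The disclosure defines no such API, so every name below is hypothetical.

# Minimal end-to-end sketch of the two processes controlled by the CPU 120.
# The projector, deformation-mechanism and shape-decision interfaces are assumptions.
class InVehicleProjectionController:
    def __init__(self, projector, deform_mechanism, decide_shape):
        self.projector = projector                # projection unit (projector 107)
        self.deform_mechanism = deform_mechanism  # e.g. actuators 108 / air balloons 109
        self.decide_shape = decide_shape          # e.g. decide_screen_shape above

    def update(self, display_image, viewing_env):
        # First process: project the display image generated from the content.
        self.projector.project(display_image)
        # Second process: decide the screen shape from the viewing environment
        # information and deform the screen 105 accordingly.
        self.deform_mechanism.set_shape(self.decide_shape(viewing_env))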
 <2.変形例>
 なお、上述実施の形態においては、スクリーン105が車内空間に配置されたものであるが、本技術は、スクリーンが車内空間ではなく、その他の空間に配置される投影システムにあっても同様に適用できる。
<2. Modification example>
In the above-described embodiment, the screen 105 is arranged in the vehicle interior space, but the present technology can be similarly applied to a projection system in which the screen is arranged in a space other than the vehicle interior.
 また、添付図面を参照しながら本開示の好適な実施形態について詳細に説明したが、本開示の技術的範囲はかかる例に限定されない。本開示の技術分野における通常の知識を有する者であれば、特許請求の範囲に記載された技術的思想の範疇内において、各種の変更例または修正例に想到し得ることは明らかであり、これらについても、当然に本開示の技術的範囲に属するものと了解される。 Although the preferred embodiments of the present disclosure have been described in detail with reference to the accompanying drawings, the technical scope of the present disclosure is not limited to these examples. It is clear that a person having ordinary knowledge in the technical field of the present disclosure can conceive of various changes or modifications within the scope of the technical ideas described in the claims, and it is understood that these also naturally belong to the technical scope of the present disclosure.
 また、本明細書に記載された効果は、あくまで説明的または例示的なものであって限定的ではない。つまり、本開示に係る技術は、上記の効果とともに、または上記の効果に代えて、本明細書の記載から当業者には明らかな他の効果を奏しうる。 Further, the effects described in the present specification are merely explanatory or illustrative and are not limiting. That is, the technique according to the present disclosure may exhibit other effects apparent to those skilled in the art from the description of the present specification, in addition to or in place of the above effects.
 また、技術は、以下のような構成もとることができる。
 (1)コンテンツに基づいて表示映像を生成し、該生成された表示映像を所定空間に配置されたスクリーンに投影する第1の処理と、前記コンテンツまたは前記スクリーンに投影される映像の観視環境情報に基づいてスクリーン形状を決定し、該決定されたスクリーン形状に前記スクリーンの形状を変形する第2の処理を制御する制御部を備える
 情報処理装置。
 (2)前記所定空間は、車内空間であり、
 前記スクリーンは、車の天井部分に対応して配置されている
 前記(1)に記載の情報処理装置。
 (3)前記第2の処理では、前記スクリーンの形状変形を、前記車の天井部分と前記スクリーンの間に配置された変形機構により行われる
 前記(2)に記載の情報処理装置。
 (4)前記第2の処理では、前記スクリーンに投影される映像の内容に基づいて前記スクリーン形状を決定する
 前記(1)から(3)のいずれかに記載の情報処理装置。
 (5)前記第2の処理では、前記コンテンツに対応付けされた、スクリーン形状情報が格納されたメタデータを取得し、該取得されたメタデータに基づいて前記スクリーン形状を決定する
 前記(1)から(4)のいずれかに記載の情報処理装置。
 (6)前記スクリーン形状情報は、区間とスクリーン形状の情報を含む情報組が1つまたは複数で構成される
 前記(5)に記載の情報処理装置。
 (7)前記スクリーン形状の情報は、前記スクリーンの全領域の形状または前記スクリーンの分割された各領域の形状を示す
 前記(6)に記載の情報処理装置。
 (8)前記第2の処理では、前記メタデータを、ネットワークを介して、前記コンテンツを取得するコンテンツサーバまたは該コンテンツサーバとは異なるメタデータサーバから取得する
 前記(5)から(7)のいずれかに記載の情報処理装置。
 (9)前記メタデータに格納されるスクリーン形状情報は、前記コンテンツサーバまたは前記メタデータサーバにおいて、多人数が前記コンテンツに基づく映像をスクリーンに投影する際に使用したスクリーン形状の履歴に基づいて設定される
 前記(8)に記載の情報処理装置。
 (10)前記観視環境情報は、前記スクリーンに投影された映像を観視する複数の観視位置と各観視位置にいる観視者の人数の情報を含む
 前記(1)から(3)のいずれかに記載の情報処理装置。
 (11)前記第2の処理では、観視者の人数が最も多い観視位置に合うように前記スクリーン形状を決定する
 前記(10)に記載の情報処理装置。
 (12)前記所定区間は、車内空間であり、
 前記スクリーンは、車の天井部分に配置されており、
 前記観視環境情報は、前記車の前後方向の座席位置と各座席位置にいる乗員の人数の情報を含む
 前記(10)または(11)に記載の情報処理装置。
 (13)前記観視環境情報は、前記乗員の状態情報をさらに含む
 前記(12)に記載の情報処理装置。
 (14)前記第2の処理では、前記乗員の状態情報により前記各座席位置にいる乗員の人数を調整し、調整後に乗員の人数が最も多い座席位置に合うように前記スクリーン形状を決定する
 前記(13)に記載の情報処理装置。
 (15)前記観視環境情報は、各観視位置にいる観視者の属性情報をさらに含む
 前記(10)に記載の情報処理装置。
 (16)前記第2の処理では、前記属性情報で示される優先観視者が存在するとき、該優先観視者がいる観視位置に合うように前記スクリーン形状を決定する
 前記(15)に記載の情報処理装置。
 (17)前記所定区間は、車内空間であり、
 前記スクリーンは、車の天井部分に配置されており、
 前記観視環境情報は、走行中の場所の情報を含む
 前記(1)から(3)のいずれかに記載の情報処理装置。
 (18)前記制御部は、前記スクリーンに投影される映像の観視中における前記スクリーンの形状の変更情報を前記コンテンツに関連付けて記録する処理をさらに制御する
 前記(1)に記載の情報処理装置。
 (19)コンテンツに基づいて表示映像を生成し、該生成された表示映像を所定空間に配置されたスクリーンに投影する処理を制御する手順と、
 前記コンテンツまたは前記スクリーンに投影される映像の観視環境情報に基づいてスクリーン形状を決定し、該決定されたスクリーン形状に前記スクリーンの形状を変形する処理を制御する手順を有する
 情報処理方法。
 (20)コンピュータを、
 コンテンツに基づいて表示映像を生成し、該生成された表示映像を所定空間に配置されたスクリーンに投影する第1の処理と、前記コンテンツまたは前記スクリーンに投影される映像の観視環境情報に基づいてスクリーン形状を決定し、該決定されたスクリーン形状に前記スクリーンの形状を変形する第2の処理を制御する制御手段として機能させる
 プログラム。
In addition, the technology can have the following configurations.
(1) An information processing device including a control unit that controls a first process of generating a display image based on content and projecting the generated display image onto a screen arranged in a predetermined space, and a second process of determining a screen shape based on viewing environment information of the content or of the image projected on the screen and transforming the shape of the screen into the determined screen shape.
(2) The predetermined space is an interior space of the vehicle.
The information processing device according to (1) above, wherein the screen is arranged corresponding to a ceiling portion of a car.
(3) The information processing apparatus according to (2), wherein in the second process, the shape of the screen is deformed by a deformation mechanism arranged between the ceiling portion of the car and the screen.
(4) The information processing apparatus according to any one of (1) to (3), wherein in the second process, the screen shape is determined based on the content of an image projected on the screen.
(5) The information processing apparatus according to any one of (1) to (4) above, wherein in the second process, metadata associated with the content and storing screen shape information is acquired, and the screen shape is determined based on the acquired metadata (an illustrative layout of such metadata is sketched after this list).
(6) The information processing apparatus according to (5) above, wherein the screen shape information is composed of one or a plurality of information sets including a section and screen shape information.
(7) The information processing apparatus according to (6) above, wherein the information on the screen shape indicates the shape of the entire area of the screen or the shape of each divided area of the screen.
(8) The information processing apparatus according to any one of (5) to (7) above, wherein in the second process, the metadata is acquired, via a network, from the content server from which the content is acquired or from a metadata server different from the content server.
(9) The screen shape information stored in the metadata is set based on the history of the screen shape used by a large number of people in the content server or the metadata server when projecting an image based on the content on the screen. The information processing apparatus according to (8) above.
(10) The information processing device according to any one of (1) to (3) above, wherein the viewing environment information includes information on a plurality of viewing positions from which an image projected on the screen is viewed and on the number of viewers at each viewing position.
(11) The information processing apparatus according to (10), wherein in the second process, the screen shape is determined so as to match the viewing position where the number of viewers is the largest.
(12) The predetermined space is an interior space of the vehicle.
The screen is placed on the ceiling of the car.
The information processing device according to (10) or (11), wherein the viewing environment information includes information on a seat position in the front-rear direction of the vehicle and the number of occupants in each seat position.
(13) The information processing apparatus according to (12), wherein the viewing environment information further includes state information of the occupant.
(14) In the second process, the number of occupants in each seat position is adjusted based on the occupant status information, and after the adjustment, the screen shape is determined so as to match the seat position with the largest number of occupants. The information processing apparatus according to (13).
(15) The information processing device according to (10) above, wherein the viewing environment information further includes attribute information of a viewer at each viewing position.
(16) The information processing device according to (15) above, wherein in the second process, when a priority viewer indicated by the attribute information is present, the screen shape is determined so as to match the viewing position where the priority viewer is present.
(17) The predetermined space is an interior space of the vehicle.
The screen is placed on the ceiling of the car.
The information processing device according to any one of (1) to (3) above, wherein the viewing environment information includes information on the place where the vehicle is traveling.
(18) The information processing apparatus according to (1), wherein the control unit further controls a process of recording information on changes in the shape of the screen in association with the content while viewing an image projected on the screen.
(19) An information processing method having: a procedure of controlling a process of generating a display image based on content and projecting the generated display image onto a screen arranged in a predetermined space; and a procedure of controlling a process of determining a screen shape based on viewing environment information of the content or of an image projected on the screen and transforming the shape of the screen into the determined screen shape.
(20) A program causing a computer to function as a control means for controlling a first process of generating a display image based on content and projecting the generated display image onto a screen arranged in a predetermined space, and a second process of determining a screen shape based on viewing environment information of the content or of the image projected on the screen and transforming the shape of the screen into the determined screen shape.
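 As an illustration of the screen shape metadata described in configurations (5) to (7), the layout below shows one hypothetical encoding in Python: one or more information sets, each pairing a section of the content with either a whole-screen shape or a shape per divided region. The field names, time units and shape labels are assumptions; the disclosure does not define a concrete format.

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ScreenShapeEntry:
    """One information set: a content section paired with screen shape information."""
    start_sec: float                      # start of the section within the content
    end_sec: float                        # end of the section within the content
    whole: Optional[str] = None           # shape of the entire screen, e.g. "flat", "concave"
    regions: Dict[str, str] = field(default_factory=dict)  # divided-region id -> shape

# Hypothetical metadata associated with one piece of content, as it might be
# retrieved from the content server or from a separate metadata server.
content_metadata = [
    ScreenShapeEntry(0.0, 600.0, whole="flat"),
    ScreenShapeEntry(600.0, 900.0, regions={"front": "concave", "rear": "flat"}),
]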
 10・・・車載投影システム
 100・・・車
 101・・・前座席(前席シート)
 102・・・後座席(後席シート)
 103・・・天井部分
 105・・・スクリーン
 106・・・変形機構
 107・・・プロジェクター
 108・・・アクチュエータ
 109・・・エアバルーン
 110・・・センサー群
 120・・・CPU
 121・・・ROM
 122・・・RAM
 123・・・バス
 124・・・入出力インターフェース
 125・・・操作部
 126・・・入出力部
 127・・・記憶部
 128・・・表示部
 129・・・投影部
 130・・・通信部
 201・・・コンテンツサーバ
 202・・・メタデータサーバ
 300・・・ネットワーク
10 ... In-vehicle projection system
100 ... Car
101 ... Front seat (front seat)
102 ... Rear seat (rear seat)
103 ... Ceiling part
105 ... Screen
106 ... Deformation mechanism
107 ... Projector
108 ... Actuator
109 ... Air balloon
110 ... Sensor group
120 ... CPU
121 ... ROM
122 ... RAM
123 ... Bus
124 ... Input/output interface
125 ... Operation unit
126 ... Input/output unit
127 ... Storage unit
128 ... Display unit
129 ... Projection unit
130 ... Communication unit
201 ... Content server
202 ... Metadata server
300 ... Network

Claims (20)

  1.  コンテンツに基づいて表示映像を生成し、該生成された表示映像を所定空間に配置されたスクリーンに投影する第1の処理と、前記コンテンツまたは前記スクリーンに投影される映像の観視環境情報に基づいてスクリーン形状を決定し、該決定されたスクリーン形状に前記スクリーンの形状を変形する第2の処理を制御する制御部を備える
     情報処理装置。
    An information processing apparatus comprising a control unit that controls a first process of generating a display image based on content and projecting the generated display image onto a screen arranged in a predetermined space, and a second process of determining a screen shape based on viewing environment information of the content or of the image projected on the screen and transforming the shape of the screen into the determined screen shape.
  2.  前記所定空間は、車内空間であり、
     前記スクリーンは、車の天井部分に対応して配置されている
     請求項1に記載の情報処理装置。
    The predetermined space is an interior space of the vehicle.
    The information processing device according to claim 1, wherein the screen is arranged corresponding to a ceiling portion of a car.
  3.  前記第2の処理では、前記スクリーンの形状変形を、前記車の天井部分と前記スクリーンの間に配置された変形機構により行われる
     請求項2に記載の情報処理装置。
    The information processing apparatus according to claim 2, wherein in the second process, the shape of the screen is deformed by a deformation mechanism arranged between the ceiling portion of the car and the screen.
  4.  前記第2の処理では、前記スクリーンに投影される映像の内容に基づいて前記スクリーン形状を決定する
     請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, wherein in the second process, the screen shape is determined based on the content of an image projected on the screen.
  5.  前記第2の処理では、前記コンテンツに対応付けされた、スクリーン形状情報が格納されたメタデータを取得し、該取得されたメタデータに基づいて前記スクリーン形状を決定する
     請求項1に記載の情報処理装置。
    The information processing apparatus according to claim 1, wherein in the second process, metadata associated with the content and storing screen shape information is acquired, and the screen shape is determined based on the acquired metadata.
  6.  前記スクリーン形状情報は、区間とスクリーン形状の情報を含む情報組が1つまたは複数で構成される
     請求項5に記載の情報処理装置。
    The information processing apparatus according to claim 5, wherein the screen shape information includes one or a plurality of information sets including a section and screen shape information.
  7.  前記スクリーン形状の情報は、前記スクリーンの全領域の形状または前記スクリーンの分割された各領域の形状を示す
     請求項6に記載の情報処理装置。
    The information processing apparatus according to claim 6, wherein the information on the screen shape indicates the shape of the entire area of the screen or the shape of each divided area of the screen.
  8.  前記第2の処理では、前記メタデータを、ネットワークを介して、前記コンテンツを取得するコンテンツサーバまたは該コンテンツサーバとは異なるメタデータサーバから取得する
     請求項5に記載の情報処理装置。
    The information processing apparatus according to claim 5, wherein in the second process, the metadata is acquired from a content server that acquires the content or a metadata server different from the content server via a network.
  9.  前記メタデータに格納されるスクリーン形状情報は、前記コンテンツサーバまたは前記メタデータサーバにおいて、多人数が前記コンテンツに基づく映像をスクリーンに投影する際に使用したスクリーン形状の履歴に基づいて設定される
     請求項8に記載の情報処理装置。
    The information processing apparatus according to claim 8, wherein the screen shape information stored in the metadata is set, in the content server or the metadata server, based on the history of the screen shapes used by many people when projecting images based on the content onto screens.
  10.  前記観視環境情報は、前記スクリーンに投影された映像を観視する複数の観視位置と各観視位置にいる観視者の人数の情報を含む
     請求項1に記載の情報処理装置。
    The information processing device according to claim 1, wherein the viewing environment information includes information on a plurality of viewing positions for viewing an image projected on the screen and the number of viewers at each viewing position.
  11.  前記第2の処理では、観視者の人数が最も多い観視位置に合うように前記スクリーン形状を決定する
     請求項10に記載の情報処理装置。
    The information processing apparatus according to claim 10, wherein in the second process, the screen shape is determined so as to fit the viewing position where the number of viewers is the largest.
  12.  前記所定区間は、車内空間であり、
     前記スクリーンは、車の天井部分に配置されており、
     前記観視環境情報は、前記車の前後方向の座席位置と各座席位置にいる乗員の人数の情報を含む
     請求項10に記載の情報処理装置。
    The predetermined space is the space inside the vehicle.
    The screen is placed on the ceiling of the car.
    The information processing device according to claim 10, wherein the viewing environment information includes information on seat positions in the front-rear direction of the vehicle and the number of occupants in each seat position.
  13.  前記観視環境情報は、前記乗員の状態情報をさらに含む
     請求項12に記載の情報処理装置。
    The information processing device according to claim 12, wherein the viewing environment information further includes state information of the occupant.
  14.  前記第2の処理では、前記乗員の状態情報により前記各座席位置にいる乗員の人数を調整し、調整後に乗員の人数が最も多い座席位置に合うように前記スクリーン形状を決定する
     請求項13に記載の情報処理装置。
    The information processing device according to claim 13, wherein in the second process, the number of occupants at each seat position is adjusted based on the state information of the occupants, and after the adjustment, the screen shape is determined so as to match the seat position with the largest number of occupants.
  15.  前記観視環境情報は、各観視位置にいる観視者の属性情報をさらに含む
     請求項10に記載の情報処理装置。
    The information processing device according to claim 10, wherein the viewing environment information further includes attribute information of a viewer at each viewing position.
  16.  前記第2の処理では、前記属性情報で示される優先観視者が存在するとき、該優先観視者がいる観視位置に合うように前記スクリーン形状を決定する
     請求項15に記載の情報処理装置。
    The information processing device according to claim 15, wherein in the second process, when a priority viewer indicated by the attribute information is present, the screen shape is determined so as to match the viewing position where the priority viewer is present.
  17.  前記所定区間は、車内空間であり、
     前記スクリーンは、車の天井部分に配置されており、
     前記観視環境情報は、走行中の場所の情報を含む
     請求項1に記載の情報処理装置。
    The predetermined space is the space inside the vehicle.
    The screen is placed on the ceiling of the car.
    The information processing device according to claim 1, wherein the viewing environment information includes information on the place where the vehicle is traveling.
  18.  前記制御部は、前記スクリーンに投影される映像の観視中における前記スクリーンの形状の変更情報を前記コンテンツに関連付けて記録する処理をさらに制御する
     請求項1に記載の情報処理装置。
    The information processing device according to claim 1, wherein the control unit further controls a process of recording change information of the shape of the screen during viewing of the image projected on the screen, in association with the content.
  19.  コンテンツに基づいて表示映像を生成し、該生成された表示映像を所定空間に配置されたスクリーンに投影する処理を制御する手順と、
     前記コンテンツまたは前記スクリーンに投影される映像の観視環境情報に基づいてスクリーン形状を決定し、該決定されたスクリーン形状に前記スクリーンの形状を変形する処理を制御する手順を有する
     情報処理方法。
    An information processing method having: a procedure of controlling a process of generating a display image based on content and projecting the generated display image onto a screen arranged in a predetermined space; and a procedure of controlling a process of determining a screen shape based on viewing environment information of the content or of an image projected on the screen and transforming the shape of the screen into the determined screen shape.
  20.  コンピュータを、
     コンテンツに基づいて表示映像を生成し、該生成された表示映像を所定空間に配置されたスクリーンに投影する第1の処理と、前記コンテンツまたは前記スクリーンに投影される映像の観視環境情報に基づいてスクリーン形状を決定し、該決定されたスクリーン形状に前記スクリーンの形状を変形する第2の処理を制御する制御手段として機能させる
     プログラム。
    A program causing a computer to function as a control means for controlling a first process of generating a display image based on content and projecting the generated display image onto a screen arranged in a predetermined space, and a second process of determining a screen shape based on viewing environment information of the content or of the image projected on the screen and transforming the shape of the screen into the determined screen shape.
PCT/JP2021/022935 2020-06-30 2021-06-16 Information processing device, information processing method, and program WO2022004394A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-112638 2020-06-30
JP2020112638 2020-06-30

Publications (1)

Publication Number Publication Date
WO2022004394A1 true WO2022004394A1 (en) 2022-01-06

Family

ID=79316103

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/022935 WO2022004394A1 (en) 2020-06-30 2021-06-16 Information processing device, information processing method, and program

Country Status (1)

Country Link
WO (1) WO2022004394A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003114635A (en) * 2001-10-05 2003-04-18 Toppan Printing Co Ltd Information display medium which is thin and flexible, method for providing information by using the same and information providing system
JP2008107537A (en) * 2006-10-25 2008-05-08 Matsushita Electric Works Ltd Video display system
JP2009139605A (en) * 2007-12-06 2009-06-25 Hitachi Ltd Method of displaying content in vehicle
EP2809068A1 (en) * 2013-05-31 2014-12-03 LG Electronics, Inc. Image display device and method of controlling the same
US20170212415A1 (en) * 2014-07-15 2017-07-27 Cj Cgv Co., Ltd. Variable screen system
JP2019191947A (en) * 2018-04-25 2019-10-31 パイオニア株式会社 Information processing device

Similar Documents

Publication Publication Date Title
KR102432614B1 (en) A shared experience for vehicle occupants and remote users
US9864559B2 (en) Virtual window display system
CN111480194B (en) Information processing device, information processing method, program, display system, and moving object
JP4700539B2 (en) Display device
JP6931801B2 (en) In-vehicle display system and control method of this in-vehicle display system
US20230093446A1 (en) Information processing device, information processing method, and program
JP2018038009A (en) Image output device and image output method
WO2022004394A1 (en) Information processing device, information processing method, and program
US20230221798A1 (en) System for controlling media play
JP2007008354A (en) Input/output control device
CN116775189A (en) Interactive control device, method, vehicle-mounted equipment, vehicle and storage medium
TW201926050A (en) Vehicle multi-display control system and vehicle multi-display control method
KR20230108812A (en) System for controlling vehicle display
KR20230108810A (en) System for controlling vehicle display
KR20230108648A (en) System for controlling media play
KR20230108651A (en) System for controlling vehicle display
KR20230108811A (en) System for controlling vehicle display
KR20230108646A (en) System for controlling vehicle display
KR20230108649A (en) System for hazard determination and warining
KR20230108647A (en) System for controlling media play
KR20230108654A (en) System for controlling vehicle display
KR20230108652A (en) System for controlling vehicle display
KR20230108650A (en) Vehicle control system for reducing motion sickness
KR20230108655A (en) Purpose built vehicle
CN118494350A (en) Vehicle side window display system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21832173

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21832173

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP