WO2023190344A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2023190344A1
WO2023190344A1 (application PCT/JP2023/012203)
Authority
WO
WIPO (PCT)
Prior art keywords
avatar
exhibitor
visitor
booth
image
Prior art date
Application number
PCT/JP2023/012203
Other languages
French (fr)
Japanese (ja)
Inventor
譲治 堀
Original Assignee
株式会社ジクウ
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社ジクウ
Publication of WO2023190344A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services

Definitions

  • the present invention relates to an information processing device, an information processing method, and a program.
  • Conventionally, an image is simply captured from a viewpoint set at a certain position (coordinates) in a predetermined three-dimensional virtual space, for example the position of an avatar existing in that virtual space.
  • That is, only an image captured from such a given viewpoint (hereinafter referred to as an "avatar viewpoint image") was generated and viewed by the user.
  • When viewing an avatar viewpoint image, the user knows that it includes objects existing in the virtual space (other users' avatars, walls, etc.), but it is difficult to know which object to take an action on (for example, which avatar to talk to). That is, in the conventional technology, including the technique described in Patent Document 1, a system using avatar viewpoint images in a virtual space is not sufficiently convenient.
  • the present invention has been made in view of this situation, and aims to improve the convenience of a system that uses avatar viewpoint images in a virtual space.
  • An information processing device according to one embodiment of the present invention includes: arrangement means for arranging, in a three-dimensional virtual space, an exhibition hall in which a booth is arranged, an exhibitor avatar existing as an exhibitor in the booth, and a visitor avatar movable as a visitor within the exhibition hall; exhibitor avatar generating means for generating the exhibitor avatar as an avatar composed of two-dimensional information, based on a first instruction to generate the exhibitor avatar among instructions given by a first user's operation on a first device; and visitor avatar moving means for moving the visitor avatar to a predetermined position in the virtual space, based on a second instruction to move the avatar to a predetermined position in the exhibition hall among instructions given by a second user's operation on a second device.
  • The device further includes image generation means for generating, as a visitor viewpoint image, a three-dimensional image captured from a viewpoint set based on the coordinates of the visitor avatar, and a display control unit that executes control to display the visitor viewpoint image on the second device.
  • An information processing method and program according to one embodiment of the present invention are methods and programs corresponding to an information processing apparatus according to one embodiment of the present invention.
  • FIG. 1 is a diagram showing an example of a screen displayed in a service provided using a server according to an embodiment of an information processing device of the present invention.
  • FIG. 2 is a diagram showing an example, different from FIG. 1, of a screen displayed in the service provided using the server according to the embodiment of the information processing device of the present invention.
  • FIG. 3 is a diagram illustrating an example of the configuration of an information processing system to which a server according to an embodiment of an information processing apparatus of the present invention is applied.
  • FIG. 4 is a block diagram showing an example of the hardware configuration of the server in the information processing system shown in FIG. 3.
  • FIG. 5 is a functional block diagram showing an example of the functional configuration of the server of FIG. 4 that constitutes the information processing system of FIG. 3.
  • FIG. 6 is a diagram illustrating an example of an image showing a screen for performing various settings regarding a booth including an avatar, among images displayed on an exhibitor terminal under the control of the server having the functional configuration of FIG. 5.
  • FIG. 7 is a diagram illustrating an example of an image showing a screen for providing visitor information to an exhibitor, among images displayed on an exhibitor terminal under the control of the server having the functional configuration of FIG. 5.
  • FIG. 8 is a diagram illustrating an example of an image showing a screen including an attendant for visitors, among images displayed on an exhibitor terminal under the control of the server having the functional configuration of FIG. 5.
  • FIG. 1 is a diagram showing an example of a screen displayed in a service provided using a server according to an embodiment of an information processing apparatus of the present invention.
  • This service is provided, at an exhibition held in a three-dimensional virtual space, to organizers (for example, organizer US in FIG. 3), exhibitors (for example, exhibitors UT-1 to UT-n in FIG. 3), and visitors (for example, visitors UR-1 to UR-m in FIG. 3).
  • an exhibition hall TH set by the organizer is arranged in the three-dimensional virtual space.
  • a plurality of booths B are arranged where products, etc. (content) of a plurality of exhibitors are exhibited.
  • a visitor operates a terminal (for example, visitor terminals 4-1 to 4-m in FIG. 2) to move within the exhibition hall TH as an avatar A and visit a booth B of interest.
  • FIG. 1 shows an example, displayed on a visitor's terminal, of a three-dimensional image GA (hereinafter referred to as "avatar viewpoint image GA") obtained by capturing the three-dimensional virtual space from a viewpoint based on avatar A.
  • the visitor operates the terminal to visit booth B located in exhibition hall TH, view the exhibits in booth B, and receive explanations from exhibitors.
  • the avatar placed in the three-dimensional virtual space is usually a 3D avatar.
  • the visitor's avatar AR (hereinafter referred to as "visitor avatar AR") is a 3D avatar. That is, the visitor avatar AR is an avatar made up of three-dimensional polygons.
  • the exhibitor's avatar AS (hereinafter referred to as "exhibitor avatar AS") is a live-action 2D avatar. That is, the exhibitor avatar AS employs a two-dimensional photographic image.
  • the exhibitor avatar AS displays a message in the form of a speech bubble reading, "This is the booth of ○○, which provides 3D online event services." That is, a visitor who is about to stop by booth B is offered a line from the exhibitor avatar AS.
  • the visitor avatar AR and exhibitor avatar AS in the avatar viewpoint image GA will be explained in more detail.
  • Since the exhibitor avatar AS is a two-dimensional image, it is included in the avatar viewpoint image GA with high image quality. As a result, visitors see the exhibitor avatar AS as a more realistic live-action 2D avatar (photographic image) and can therefore feel a sense of familiarity. This makes it easier for visitors to talk to exhibitors.
  • A 3D avatar with a large number of polygons could also have high image quality in the avatar viewpoint image GA, similar to the live-action 2D avatar.
  • However, employing such a 3D avatar for both the exhibitor avatar AS and the visitor avatar AR would impose an extremely high performance burden. Therefore, in this service, a live-action 2D avatar is used as the exhibitor avatar AS. This reduces the performance burden while making it easier, as described above, for visitors to talk to exhibitors.
  • This service uses a rough (low-polygon-count) 3D avatar, as shown in FIG. 1, as the visitor avatar AR. This reduces the performance load and makes it possible to place a large number of avatars within the exhibition hall TH.
  • the visitor avatar AR is an avatar made up of three-dimensional polygons, and has different properties from the exhibitor avatar AS.
  • the exhibitor avatar AS stands out in the avatar viewpoint image GA, making it easier for visitors to grasp the exhibitor with whom they can talk.
  • As a difference in characteristics between the visitor avatar AR and the exhibitor avatar AS, a difference in how the avatars appear in the avatar viewpoint image GA will be explained.
  • Consider the case where the exhibitor avatar AS is viewed from the visitor avatar AR.
  • As mentioned above, the exhibitor avatar AS is a live-action 2D avatar (a two-dimensional photographic image). Therefore, the exhibitor avatar AS is included in the avatar viewpoint image GA as a high-quality photograph. At the same time, a live-action 2D avatar (a two-dimensional photographic image) has no thickness in the three-dimensional virtual space. Therefore, if the exhibitor avatar AS were fixed in the three-dimensional virtual space, then when the visitor avatar AR moves from facing the exhibitor avatar AS to its side, the exhibitor avatar AS would appear foreshortened, become thinner, or become so thin as to be invisible from directly beside it.
  • this service has a "2D avatar automatic direction change function" in order to obtain an avatar image more suitable for visitors.
  • the "2D avatar automatic direction change function” will be described below with further reference to FIG. 2.
  • FIG. 2 is an example of a screen displayed in a service provided using a server according to an embodiment of the information processing device of the present invention, and is a diagram showing an example different from FIG. 1.
  • Compared with FIG. 1, the visitor avatar AR in FIG. 2 has moved to the right (counterclockwise when the exhibition hall TH is viewed from above) around booth B and the exhibitor avatar AS. Nevertheless, in both avatar viewpoint images GA of FIGS. 1 and 2, the exhibitor avatar AS directly faces the visitor avatar. That is, in this service, the orientation of the exhibitor avatar AS is automatically changed by the 2D avatar automatic direction change function so that it directly faces the position of the visitor avatar AR. As a result, the exhibitor avatar AS never appears foreshortened or thin, and never becomes so thin as to be invisible from the side, making the position of the exhibitor avatar AS easy to grasp in the avatar viewpoint image GA.
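The 2D avatar automatic direction change function described above amounts to what graphics programmers call billboarding: each frame, the flat exhibitor avatar is rotated about the vertical axis toward the visitor avatar's position. A minimal sketch (function and coordinate conventions are illustrative, not from the patent):

```python
import math

def billboard_yaw(exhibitor_pos, visitor_pos):
    """Yaw (radians, about the vertical y axis) that turns a flat 2D
    avatar at exhibitor_pos so its face points at visitor_pos.
    Positions are (x, y, z) with y up; at yaw 0 the avatar faces +z."""
    dx = visitor_pos[0] - exhibitor_pos[0]
    dz = visitor_pos[2] - exhibitor_pos[2]
    return math.atan2(dx, dz)
```

Because the yaw would be recomputed per visitor each frame, every visitor's own avatar viewpoint image GA shows the exhibitor avatar head-on, regardless of where that visitor stands.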
  • In contrast, when the visitor avatar AR moves, other visitor avatars AR are reflected in the avatar viewpoint image GA in oblique view or side view.
  • The exhibitor avatar AS, however, is always reflected in the avatar viewpoint image GA facing directly, even when the visitor avatar AR moves. This further facilitates grasping the position of the exhibitor avatar AS within the avatar viewpoint image GA.
  • Each of the organizers, exhibitors, and visitors uses their respective terminal to connect, via the Internet using browser functions or the like, to a server managed by the provider of this service (hereinafter referred to as the "service provider") (for example, server 1 in FIG. 3, described later). That is, this service is realized as a so-called cloud service.
  • this service may be made available by using dedicated application software (hereinafter referred to as a "dedicated application”) installed in advance on the terminal.
  • the organizer operates a terminal to determine the shape of the exhibition hall TH and the layout of the booth B.
  • the organizer then invites multiple exhibitors, negotiates with them, and decides on booth placement.
  • the organizer operates the terminal to display a management screen, and uses the management screen to arrange the booths B of the plurality of exhibitors in the exhibition hall TH based on the booth arrangement.
  • the exhibitor operates the terminal to display the management screen, and makes various settings for his or her booth B on the management screen.
  • the exhibitor operates the terminal to set the avatar of the person in charge who will give explanations etc. at booth B, that is, the above-mentioned exhibitor avatar AS.
  • The exhibitor sets the exhibitor avatar AS by selecting an arbitrary avatar from the prepared exhibitor avatars AS or by uploading a photograph of the person in charge taken in advance. Visitors (as avatars A) can then visit the exhibition hall TH in the three-dimensional virtual space.
  • After a visitor operates a terminal and pre-registers, he or she can log in on the day of the exhibition (event) and move around the exhibition hall TH as avatar A. This movement is displayed on the terminal as the avatar viewpoint image GA. Visitors can operate their terminals to move around the exhibition hall TH, visit booths B of interest, view exhibits, listen to exhibitors' explanations, and so on.
  • FIG. 3 is a diagram illustrating an example of the configuration of an information processing system to which a server according to an embodiment of the information processing apparatus of the present invention is applied.
  • The information processing system shown in FIG. 3 includes a server 1, an organizer terminal 2, exhibitor terminals 3-1 to 3-n (n is an integer value of 1 or more), and visitor terminals 4-1 to 4-m (m is an integer value of 1 or more independent of n).
  • the server 1, organizer terminal 2, exhibitor terminals 3-1 to 3-n, and visitor terminals 4-1 to 4-m are interconnected via a predetermined network NW such as the Internet.
  • the server 1 is an information processing device managed by a service provider.
  • the server 1 executes various processes to realize this service while communicating as appropriate with the organizer terminal 2, the exhibitor terminals 3-1 to 3-n, and the visitor terminals 4-1 to 4-m.
  • the organizer terminal 2 is an information processing device operated by the organizer US.
  • the organizer terminal 2 is composed of a personal computer, a tablet, a smartphone, etc.
  • the organizer terminal 2 receives various information input operations by the organizer US and transmits them to the server 1, and receives and displays various information transmitted from the server 1, for example.
  • Each of the exhibitor terminals 3-1 to 3-n is an information processing device operated by each of the exhibitors UT-1 to UT-n.
  • the exhibitor terminal 3 is composed of a personal computer, a tablet, a smartphone, etc. Note that hereinafter, unless it is necessary to distinguish between exhibitors UT-1 to UT-n, they will be collectively referred to as exhibitors UT.
  • exhibitor terminals 3-1 to 3-n are collectively referred to as "exhibitor terminal 3.”
  • the exhibitor terminal 3 receives various information input operations by the exhibitor UT and transmits the received information to the server 1, or receives and displays various information transmitted from the server 1, for example.
  • Each of the visitor terminals 4-1 to 4-m is an information processing device operated by each of the visitors UR-1 to UR-m.
  • the visitor terminal 4 is composed of a personal computer, a tablet, a smartphone, etc. Note that hereinafter, if there is no need to distinguish each of the visitors UR-1 to UR-m, they will be collectively referred to as the visitor UR.
  • visitor terminals 4-1 to 4-m are collectively referred to as "visitor terminal 4."
  • the visitor terminal 4 receives various information input operations by the visitor UR, for example, and transmits the received information to the server 1, or receives and displays various information transmitted from the server 1.
  • FIG. 4 is a block diagram showing an example of the hardware configuration of the server in the information processing system shown in FIG. 3.
  • the server 1 includes a CPU (Central Processing Unit) 11, a ROM (Read Only Memory) 12, a RAM (Random Access Memory) 13, a bus 14, an input/output interface 15, an input unit 16, an output unit 17, a storage unit 18, a communication unit 19, and a drive 20.
  • the CPU 11 executes various processes according to programs recorded in the ROM 12 or programs loaded from the storage unit 18 into the RAM 13.
  • the RAM 13 also appropriately stores data necessary for the CPU 11 to execute various processes.
  • the CPU 11, ROM 12, and RAM 13 are interconnected via a bus 14.
  • An input/output interface 15 is also connected to the bus 14.
  • An input unit 16, an output unit 17, a storage unit 18, a communication unit 19, and a drive 20 are connected to the input/output interface 15.
  • the input unit 16 includes, for example, a keyboard, and inputs various information.
  • the output unit 17 includes a display such as a liquid crystal display, a speaker, and the like, and outputs various information as images and sounds.
  • the storage unit 18 is composed of a DRAM (Dynamic Random Access Memory) or the like, and stores various data.
  • the communication unit 19 communicates with other devices (for example, the organizer terminal 2, the exhibitor terminals 3-1 to 3-n, and the visitor terminals 4-1 to 4-m in FIG. 3) via the network NW, which includes the Internet.
  • a removable medium 31 made of a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is appropriately installed in the drive 20.
  • A program read from the removable medium 31 by the drive 20 is installed in the storage unit 18 as necessary. The removable medium 31 can also store the various data stored in the storage unit 18, in the same manner as the storage unit 18.
  • the organizer terminal 2, exhibitor terminal 3, and visitor terminal 4 in FIG. 3 can have basically the same hardware configuration as that shown in FIG. 4. Therefore, descriptions of their hardware configurations are omitted.
  • FIG. 5 is a functional block diagram showing an example of the functional configuration of the server of FIG. 4 that constitutes the information processing system of FIG. 3.
  • In the CPU 11 of the server 1, an exhibition hall setting unit 51, an exhibitor avatar generation unit 52, an avatar movement unit 53, a virtual space arrangement unit 54, an avatar viewpoint image generation unit 55, a display control unit 56, a line acquisition unit 57, a line output unit 58, a chat control unit 59, a trajectory collection unit 60, and an exhibitor-side control unit 61 function.
  • the exhibition hall setting unit 51 sets the exhibition hall TH in the three-dimensional virtual space based on the organizer US's operation on the organizer terminal 2.
  • the exhibitor avatar generation unit 52 generates an avatar composed of two-dimensional information as the exhibitor avatar AS for booth B in the exhibition hall TH, based on an instruction to generate the exhibitor avatar AS, among instructions given by the organizer US's operation on the organizer terminal 2 or the exhibitor UT's operation on the exhibitor terminal 3.
  • Specifically, the exhibitor avatar generation unit 52 can generate an avatar composed of two-dimensional information based on an instruction, among the instructions given by the organizer US's operation on the organizer terminal 2 or the exhibitor UT's operation on the exhibitor terminal 3, to select an arbitrary one from a plurality of pre-prepared exhibitor avatars AS. In addition, as necessary, the exhibitor avatar generation unit 52 can generate an avatar composed of two-dimensional information based on an instruction, among instructions given by operating the exhibitor terminal 3, to transmit and select a photographic image of a real exhibitor or a predetermined model.
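The selection-or-upload behavior described for the exhibitor avatar generation unit 52 can be sketched as follows (the instruction format, catalogue, and file names are assumptions for illustration, not from the patent):

```python
# Hypothetical catalogue of prepared guide-character images.
PREPARED_AVATARS = {"guide_01": "guide_01.png", "guide_02": "guide_02.png"}

def generate_exhibitor_avatar(instruction):
    """Resolve the 2D image used as an exhibitor avatar AS.
    instruction is a dict sent from the organizer or exhibitor terminal:
    {"select": key} picks a prepared avatar, while {"upload": path}
    supplies a photograph of the real person in charge."""
    if "select" in instruction:
        return PREPARED_AVATARS[instruction["select"]]
    if "upload" in instruction:
        return instruction["upload"]
    raise ValueError("instruction must contain 'select' or 'upload'")
```

Either way, the result is a single two-dimensional image, which is what allows the avatar to be rendered at photographic quality with little performance cost.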
  • FIG. 6 is a diagram showing an example of an image showing a screen for performing various settings regarding the booth including an avatar, among images displayed on the exhibitor terminal under the control of the server having the functional configuration of FIG. 5.
  • the screen for making various settings related to booth B, including the avatar, includes "3D booth exterior," "3D booth interior," "2D booth," "booth basic information," and "personnel management" as setting menus for booth B.
  • the screen for making various settings regarding the booth including avatars includes "visitor list” and “chat settings” as menus regarding visitors.
  • the screen for making various settings regarding the booth including the avatar includes a data "download” menu.
  • the image shown in FIG. 6 includes a plurality of live-action 2D avatars (two-dimensional photographic images) as "guide characters.”
  • the exhibitor UT can select any 2D avatar (two-dimensional photographic image) included in the "guide characters" as the exhibitor avatar AS. Further, although not shown, the exhibitor UT can upload any photographic image via the exhibitor terminal 3 to use it as a guide character.
  • the 2D avatar is not limited to a photographic image. That is, for example, the exhibitor UT can also use a 2D character image (2D or 3D character CG image) prepared in advance or uploaded by the exhibitor UT as the "guide character.”
  • the function that allows the exhibitor UT to use an image (not limited to a photographic image) set by the exhibitor UT as the exhibitor avatar AS is hereinafter referred to as an "exhibitor avatar setting function.”
  • the avatar movement unit 53 moves avatar A in the virtual space from its current position to second coordinates, based on a movement instruction, among instructions given by the visitor UR's operation on the visitor terminal 4, to move the visitor's own avatar A to the second coordinates in the exhibition hall TH.
  • the movement instruction to move the own avatar A to the second coordinates in the exhibition hall TH also includes an instruction to change the posture of avatar A. Therefore, the avatar movement unit 53 changes the posture based on the change instruction and moves avatar A to the above-mentioned second coordinates.
  • the virtual space arrangement unit 54 arranges, in the three-dimensional virtual space, the exhibition hall TH in which booth B is placed at first coordinates, and places avatar A at the second coordinates. Further, the virtual space arrangement unit 54 can place the photographic image of the real exhibitor or predetermined model, generated by the exhibitor avatar generation unit 52, as the exhibitor avatar AS at predetermined coordinates. Furthermore, the virtual space arrangement unit 54 arranges the exhibitor avatar AS in an orientation corresponding to the posture of the visitor avatar AR. Specifically, for example, the exhibitor avatar AS is arranged so that, at its position, it is included directly facing in the avatar viewpoint image GA seen from the position and posture of the visitor avatar AR.
  • the avatar viewpoint image generation unit 55 generates, as the avatar viewpoint image GA, a three-dimensional image captured from a viewpoint set based on the second coordinates, in the virtual space where the exhibition hall TH with booth B placed at the first coordinates is arranged and avatar A is placed at the second coordinates.
  • the exhibitor avatar AS is included in the avatar viewpoint image GA, facing directly, regardless of the position and posture of the visitor avatar AR.
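One common way to set a viewpoint "based on the second coordinates," as the avatar viewpoint image generation unit 55 does, is a third-person camera offset behind and above the avatar. This sketch assumes a y-up coordinate system and illustrative offset values; none of these names come from the patent:

```python
import math

def camera_from_avatar(avatar_pos, yaw, back=2.0, up=1.5):
    """Viewpoint placed behind and above the visitor avatar, so the
    generated image GA is a third-person view along the avatar's heading.
    avatar_pos is (x, y, z) with y up; yaw is the heading in radians
    (0 means the avatar faces +z)."""
    cx = avatar_pos[0] - back * math.sin(yaw)
    cy = avatar_pos[1] + up
    cz = avatar_pos[2] - back * math.cos(yaw)
    return (cx, cy, cz)
```

Rendering the scene from this camera position, looking along the avatar's heading, yields the avatar viewpoint image GA shown in FIG. 1.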
  • the display control unit 56 executes control to display the avatar viewpoint image GA on the visitor terminal 4, as shown in FIG.
  • That is, the 2D avatar automatic direction change function is a function of placing the live-action 2D avatar (two-dimensional photographic image) in the virtual space so that it directly faces the visitor avatar AR, and generating the avatar viewpoint image GA accordingly.
  • The avatar viewpoint image GA is provided so that the exhibitor avatar AS directly faces each of the plurality of visitors in the exhibition hall TH. That is, the virtual space for each of the plurality of visitors can be said to be a different virtual space, differing at least with respect to the exhibitor avatar AS.
  • the line acquisition unit 57 acquires lines to be output from the exhibitor avatar AS. Specifically, for example, the line acquisition unit 57 acquires, as a line, the character string input in the "guidance message" field of the guide character shown in FIG. 6.
  • the line output unit 58 executes control to output the line acquired by the line acquisition unit 57, as a line uttered by the exhibitor avatar AS present in booth B, from the visitor terminal 4 of the visitor UR corresponding to the visitor avatar AR. Specifically, for example, as shown in FIG. 1, the line output unit 58 outputs the line by displaying it as a speech bubble of the exhibitor avatar AS superimposed on the avatar viewpoint image GA.
  • the dialogue output unit 58 can also perform control to output dialogue input as a character string from the visitor terminal 4 as a voice read aloud by a predetermined AI or the like.
  • the line acquisition unit 57 may acquire the line as audio data, and the line output unit 58 may perform control to output the line as audio from the visitor terminal 4.
  • the line acquisition unit 57 can acquire lines from a chatbot engine or an AI chat engine (not shown). At this time, data such as the name and visit time of the visitor UR who came to booth B, lead information, action history, chat history, etc. are provided to the chatbot engine or AI chat engine, which then generates lines based on this data. As a result, the line acquisition unit 57 acquires different lines for, for example, a visitor UR belonging to a listed company and a visitor UR belonging to a small or medium-sized company. The exhibitor avatar AS thus speaks using appropriate lines suited to each visitor's situation, so that the response at booth B becomes more appropriate.
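A toy stand-in for the engine's per-visitor line selection (the dict keys and the wording of the lines are hypothetical; a real chatbot or AI chat engine would draw on the lead information and histories mentioned above):

```python
def generate_line(visitor):
    """Choose a greeting line for the exhibitor avatar from visitor
    lead data. visitor is a dict of attributes such as company_size
    ("listed", "sme", ...) and visit_count."""
    if visitor.get("company_size") == "listed":
        return "Thank you for visiting. Our enterprise plan may interest you."
    if visitor.get("visit_count", 0) > 1:
        return "Welcome back! Any questions about the demo?"
    return "Hello! Feel free to look around our booth."
```

The line acquisition unit 57 would receive the returned string, and the line output unit 58 would display it as the avatar's speech bubble.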
  • Alternatively, the line output from the visitor terminal 4 may be one recorded by the exhibitor UT in audio form. That is, in this case, the line acquisition unit 57 acquires, as the line, the voice recorded via the exhibitor terminal 3, and the line output unit 58 causes the visitor terminal 4 to output the acquired audio line.
  • The function in which the exhibitor avatar AS utters lines set by the exhibitor UT is hereinafter referred to as the "exhibitor avatar line function."
  • the visitor UR (as avatar A) can chat with the person in charge by going to booth B where that person is present.
  • To this end, the chat control unit 59 shown in FIG. 5 is provided in the CPU 11 of the server 1. That is, in order to enable a chat between the visitor UR corresponding to avatar A present in booth B and the person in charge present at booth B, the chat control unit 59 controls communication between the visitor terminal 4 operated by the visitor UR and the exhibitor terminal 3 or another terminal operated by the person in charge.
  • Hereinafter, the function of chatting between the visitor UR corresponding to avatar A present in booth B and the person in charge present at booth B will be referred to as the "chat function."
  • a plurality of exhibitor representatives can be assigned to the exhibitor avatar AS.
  • the person in charge of the assigned exhibitor UT can respond via text chat, voice chat, video chat, etc. from the management screen. That is, the exhibitor avatar AS is associated with the exhibitor's personnel on the management screen.
  • the visitor UR can have a voice chat, text chat, video chat, etc. with the person (the person in charge of the exhibitor UT) associated with the exhibitor avatar AS he/she talked to. Note that only one person may be linked.
  • the trajectory collection unit 60 acquires trajectory data indicating the locus of movement within the exhibition hall TH of each avatar A operated by each of the visitors UR-1 to UR-m in FIG. 3. The trajectory data of the visitors UR-1 to UR-m is then aggregated and a predetermined analysis is performed.
  • the exhibitor-side control unit 61 measures the stay time of the visitor avatar AR at booth B based on the trajectory data, and when the stay time exceeds a predetermined value, executes at least one of control for sending a predetermined notification to the exhibitor terminal 3 and control for causing the exhibitor avatar AS to produce a predetermined output at the visitor terminal 4.
  • For example, the server 1 can cause the exhibitor terminal 3 of the exhibitor UT to present the UI shown in FIG. 7, urging the exhibitor UT to respond to a visitor UR who has stayed longer than the predetermined time.
  • FIG. 7 is a diagram showing an example of an image showing a screen for providing visitor information to exhibitors, among images displayed on an exhibitor terminal under the control of the server having the functional configuration of FIG. 5.
  • That is, when the "visitor list" menu of FIG. 6 is selected, the name and visit time of each visitor UR who came to booth B, lead information, action history, chat history, etc. are displayed.
  • When the visitor avatar AR stays at booth B for a predetermined time, the visitor UR is considered to be interested in the contents of booth B. Therefore, in this service, when the visitor avatar AR stays at booth B for a predetermined time, the exhibitor UT is prompted to respond to that visitor UR.
  • the function of prompting (guiding) the exhibitor UT to respond to a visitor UR who has come to the booth B will hereinafter be referred to as the "response guidance function by the exhibitor".
  • the exhibitor side control unit 61 may speak a predetermined message to the visitor UR who has stayed at the booth B for a predetermined time via the exhibitor avatar AS. That is, for example, a message such as "Is there anything you are concerned about?" is provided to the visitor UR. This starts a chat with the visitor UR, and the exhibitor UT is prompted to take action based on the response.
  • in response to a predetermined trigger for the purpose of attending, the virtual space arrangement unit 54 can place the exhibitor avatar AS at a position different from its position in the booth B, chosen so that the exhibitor avatar AS is included as a subject in the avatar viewpoint image GA.
  • a predetermined line is output from the arranged exhibitor avatar AS.
  • FIG. 8 is a diagram showing an example of an image, among images displayed on an exhibitor terminal under the control of the server having the functional configuration of FIG. 5, of a screen including attendance for a visitor. In FIG. 8, the visitor avatar AR is located in a passageway outside of the booth B. In the avatar viewpoint image GA, an exhibitor avatar AS1 other than the exhibitor avatar AS of the booth B is arranged. The exhibitor avatar AS1 then utters the following line in the form of a speech bubble: "Next time, please go to the XXX booth in the back right. There will be an interesting MA."
  • the avatar viewpoint image GA is thus displayed including the exhibitor avatar AS1, who attends to the visitor at the timing when the visitor should be attended to. In the case of automatic attendance, the booth B introduced by the exhibitor avatar AS1 is a booth selected by AI or the like, for example the next booth considered likely to interest the visitor.
  • the visitor UR can also hide the attending exhibitor avatar AS1 by performing a predetermined operation. The function of attending to the visitor UR in this way is hereinafter referred to as the "attending function using a 2D avatar".
  • the system configuration shown in FIG. 3 and the hardware configuration of the server 1 shown in FIG. 4 are merely examples for achieving the object of the present invention, and are not particularly limited.
  • the functional block diagram shown in FIG. 5 is merely an example and is not particularly limiting. In other words, it is sufficient that the information processing system shown in FIG. 3 as a whole has functions capable of executing the above-described processing; what functional blocks and databases are used to realize those functions is not particularly limited to the example of FIG. 5.
  • the locations of the functional blocks and the database are not limited to those shown in FIG. 5 and may be arbitrary. In the example of FIG. 5, all processing is performed under the control of the CPU 11 (FIG. 4) of the server 1 that constitutes the information processing system of FIG. 3, but the configuration is not limited thereto.
  • at least part of the functional blocks and database located on the server 1 side may be provided on the organizer terminal 2 side, the exhibitor terminal 3 side, the visitor terminal 4 side, or on another information processing device (not shown).
  • one functional block may be configured by a single piece of hardware, a single piece of software, or a combination thereof.
  • a program constituting the software is installed on a computer or the like from a network or a recording medium.
  • the computer may be a computer built into dedicated hardware. Further, the computer may be a computer that can execute various functions by installing various programs, such as a server, a general-purpose smartphone, or a personal computer.
  • recording media containing such programs include not only removable media (not shown) distributed separately from the main body of the device in order to provide the program to the user, but also recording media provided to the user in a state pre-installed in the main body of the device.
  • the steps describing the program recorded on a recording medium include not only processing performed chronologically in the stated order, but also processing that is not necessarily performed chronologically and is instead executed in parallel or individually.
  • an information processing device to which the present invention is applied only needs to have the following configuration, and can take various embodiments.
  • the information processing device (for example, the server 1 in FIG. 3) to which the present invention is applied includes:
  • arrangement means for arranging, in a three-dimensional virtual space, an exhibition hall (the exhibition hall TH in FIG. 1) in which a booth (the booth B in FIG. 1) is arranged, an exhibitor avatar (the exhibitor avatar AS in FIG. 1) existing as an exhibitor in the booth, and a visitor avatar (the visitor avatar AR in FIG. 1) movable as a visitor within the exhibition hall;
  • exhibitor avatar generation means for generating the exhibitor avatar as an avatar composed of two-dimensional information, based on a first instruction to generate the exhibitor avatar among instructions performed by a first user (the exhibitor UT) on a first device (the exhibitor terminal 3 in FIG. 3);
  • visitor avatar moving means for moving the visitor avatar to a predetermined position in the virtual space, based on a second instruction to move the avatar to the predetermined position in the exhibition hall among instructions performed by a second user (the visitor UR) on a second device (the visitor terminal 4 in FIG. 3);
  • image generation means for generating, as a visitor viewpoint image (for example, the avatar viewpoint image GA in FIG. 1), a three-dimensional image captured from a viewpoint set based on the coordinates of the visitor avatar, in the virtual space in which the exhibition hall, with the exhibitor avatar at a fixed position in the booth, and the visitor avatar at the predetermined position are respectively arranged.
  • it is also possible to further include line acquisition means for acquiring a line to be output from the exhibitor avatar, and line output means for executing control to output the acquired line from the exhibitor avatar on the second device of the second user corresponding to the visitor avatar present in the booth. As a result, the above-mentioned "exhibitor avatar's dialogue output function" is realized.
  • the second instruction includes an instruction to change the posture of the avatar
  • the avatar moving means changes the posture based on the change instruction and moves the avatar to the second coordinates
  • the arrangement means may arrange the exhibitor avatar at the predetermined position in an orientation corresponding to the posture of the visitor avatar present in the booth.
  • the first instruction includes a prepared image prepared in advance by the first user
  • the exhibitor avatar generation means may also generate the exhibitor avatar based on the prepared image.
  • it is possible to further include chat control means for controlling communication between the second device operated by the second user and the first device, or another device operated by the person in charge, so that the second user corresponding to the visitor avatar present at the booth can chat with the person in charge associated with the exhibitor avatar. As a result, the above-mentioned "exhibitor-side person-in-charge assignment chat function" is realized.
  • it is possible to further include trajectory collection means for collecting trajectory data indicating a movement trajectory in the exhibition hall of the visitor avatar corresponding to the second user, and exhibitor-side control means for measuring, based on the trajectory data, the stay time of the visitor avatar staying at the booth and, when the stay time exceeds a predetermined value, executing at least one of control to send a predetermined notification to the first device and control to cause the exhibitor avatar to produce a predetermined output. As a result, the above-mentioned "function for guiding exhibitors to respond" is realized.
  • in response to a predetermined trigger for the purpose of attending, the arrangement means can also place the exhibitor avatar at a position different from the fixed position in the booth, chosen so that the exhibitor avatar is included as a subject in the visitor viewpoint image. Thereby, the above-mentioned "attend function using 2D avatar" is realized.
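The stay-time control recited above (measuring how long a visitor avatar remains at a booth and notifying the exhibitor once a threshold is exceeded) can be sketched roughly as follows. This is an illustrative sketch only; the class name, threshold value, and callback are assumptions for the example, not details from the publication.

```python
import time

class StayTimeMonitor:
    """Tracks how long each visitor avatar stays inside a booth area and
    fires a callback once the stay exceeds a threshold (illustrative only)."""

    def __init__(self, threshold_sec, on_long_stay):
        self.threshold_sec = threshold_sec
        self.on_long_stay = on_long_stay  # e.g. notify the exhibitor terminal
        self.entered_at = {}              # visitor_id -> entry timestamp
        self.notified = set()

    def update(self, visitor_id, in_booth, now=None):
        """Call on every position update derived from the trajectory data."""
        now = time.time() if now is None else now
        if in_booth:
            self.entered_at.setdefault(visitor_id, now)
            stay = now - self.entered_at[visitor_id]
            if stay >= self.threshold_sec and visitor_id not in self.notified:
                self.notified.add(visitor_id)
                self.on_long_stay(visitor_id, stay)
        else:
            # Leaving the booth resets the stay timer and the notified flag.
            self.entered_at.pop(visitor_id, None)
            self.notified.discard(visitor_id)

# Example: a visitor lingers past a 30-second threshold, triggering one alert.
alerts = []
monitor = StayTimeMonitor(30, lambda vid, stay: alerts.append((vid, stay)))
monitor.update("UR-1", in_booth=True, now=0)
monitor.update("UR-1", in_booth=True, now=31)  # threshold exceeded here
```

In a real deployment the callback would send the notification to the exhibitor terminal or make the exhibitor avatar speak, as described above.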

Abstract

The present invention addresses the problem of improving convenience in a system that utilizes an avatar viewpoint image within a virtual space. A virtual space arrangement unit 51 arranges, in a three-dimensional virtual space, an exhibition hall in which a booth is arranged, and an avatar that can move within the exhibition hall. An exhibitor avatar generation unit 52 generates an exhibitor avatar as an avatar that is configured using two-dimensional information, such generation being on the basis of an instruction for generating an exhibitor avatar, from among instructions issued through operations by an exhibitor on an exhibitor terminal 3. An avatar movement unit 53 moves the position of an avatar to second coordinates on the basis of an instruction from a visitor to move the avatar. An avatar viewpoint image generation unit 55 generates, as an avatar viewpoint image, a three-dimensional image captured from a viewpoint set on the basis of the second coordinates in the virtual space in which the exhibition hall, in which the booth is arranged at first coordinates, and the avatar at the second coordinates are arranged. This configuration solves the abovementioned problem.

Description

Information processing device, information processing method, and program
The present invention relates to an information processing device, an information processing method, and a program.
Conventionally, there is technology related to VR (Virtual Reality) in which images captured from various viewpoints in a three-dimensional virtual space are viewed using an HMD (Head Mounted Display) or the like (see Patent Document 1).
[Patent Document 1] Japanese Patent Application Publication No. 2017-012397
However, in conventional technology including that described in Patent Document 1, an image captured from the viewpoint of a certain position (coordinates) in a predetermined three-dimensional virtual space, for example an image captured from a viewpoint set based on an avatar existing in that virtual space (hereinafter referred to as an "avatar viewpoint image"), was simply generated and presented to the user. When viewing an avatar viewpoint image, the user can see that it includes objects existing in the virtual space (other users' avatars, walls, etc.), but it was difficult to grasp which object an action (for example, the act of talking to someone) should be directed at.
That is, in conventional technology including that described in Patent Document 1, a system using avatar viewpoint images of a virtual space did not offer sufficient convenience.
The present invention has been made in view of this situation, and aims to improve the convenience of a system that uses avatar viewpoint images of a virtual space.
To achieve the above object, an information processing device according to one aspect of the present invention includes:
arrangement means for arranging, in a three-dimensional virtual space, an exhibition hall in which a booth is arranged, an exhibitor avatar existing as an exhibitor in the booth, and a visitor avatar movable as a visitor within the exhibition hall;
exhibitor avatar generation means for generating the exhibitor avatar as an avatar composed of two-dimensional information, based on a first instruction to generate the exhibitor avatar among instructions performed by a first user's operation on a first device;
visitor avatar moving means for moving the visitor avatar to a predetermined position in the virtual space, based on a second instruction to move the avatar to the predetermined position in the exhibition hall among instructions performed by a second user's operation on a second device;
image generation means for generating, as a visitor viewpoint image, a three-dimensional image captured from a viewpoint set based on the coordinates, in the virtual space in which the exhibition hall, with the booth where the exhibitor avatar composed of two-dimensional information is present at a fixed position, and the visitor avatar at the predetermined position are respectively arranged; and
display control means for executing control to display the visitor viewpoint image on the second device.
An information processing method and program according to one aspect of the present invention are a method and a program corresponding to the information processing device according to one aspect of the present invention.
According to the present invention, it is possible to improve the convenience of a system that uses avatar viewpoint images of a virtual space.
FIG. 1 is a diagram showing an example of a screen displayed in a service provided using a server according to an embodiment of the information processing device of the present invention.
FIG. 2 is a diagram showing an example, different from FIG. 1, of a screen displayed in the service provided using the server according to the embodiment.
FIG. 3 is a diagram showing an example of the configuration of an information processing system to which the server according to the embodiment is applied.
FIG. 4 is a block diagram showing an example of the hardware configuration of the server in the information processing system shown in FIG. 3.
FIG. 5 is a functional block diagram showing an example of the functional configuration of the server of FIG. 4 that constitutes the information processing system of FIG. 3.
FIG. 6 is a diagram showing an example of an image, among images displayed on an exhibitor terminal under the control of the server having the functional configuration of FIG. 5, of a screen for making various settings for a booth including an avatar.
FIG. 7 is a diagram showing an example of an image, among images displayed on an exhibitor terminal under the control of the server having the functional configuration of FIG. 5, of a screen providing visitor information to an exhibitor.
FIG. 8 is a diagram showing an example of an image, among images displayed on an exhibitor terminal under the control of the server having the functional configuration of FIG. 5, of a screen including attendance for a visitor.
Embodiments of the present invention will be described below with reference to the drawings.
FIG. 1 is a diagram showing an example of a screen displayed in a service provided using a server according to an embodiment of the information processing device of the present invention.
First, before describing the embodiment of the present invention, a service provided using a server according to an embodiment of the information processing device of the present invention (hereinafter referred to as "this service") will be briefly described with reference to FIG. 1.
This service is as follows.
That is, this service is provided, at an exhibition held in a three-dimensional virtual space, to an organizer (for example, the organizer US in FIG. 3), exhibitors (for example, the exhibitors UT-1 to UT-n in FIG. 3), and visitors (for example, the visitors UR-1 to UR-m in FIG. 3).
Specifically, for example, as shown in FIG. 1, an exhibition hall TH set by the organizer is arranged in the three-dimensional virtual space. In the exhibition hall TH, a plurality of booths B are arranged in which the products and other content of a plurality of exhibitors are exhibited.
A visitor operates a terminal (for example, one of the visitor terminals 4-1 to 4-m in FIG. 3) to move within the exhibition hall TH as an avatar A and visit a booth B of interest.
FIG. 1 shows an example of a three-dimensional image GA (hereinafter referred to as an "avatar viewpoint image GA") displayed on a visitor's terminal, obtained by capturing the three-dimensional virtual space from a viewpoint based on the avatar A.
The visitor then operates the terminal to visit a booth B arranged in the exhibition hall TH, view the exhibits in the booth B, and receive explanations from the exhibitor.
Here, an avatar placed in a three-dimensional virtual space is usually a 3D avatar.
In this service as well, the visitor's avatar AR (hereinafter referred to as the "visitor avatar AR") is a 3D avatar. That is, the visitor avatar AR is an avatar made up of three-dimensional polygons.
However, the exhibitor's avatar AS (hereinafter referred to as the "exhibitor avatar AS") is a live-action 2D avatar. That is, a two-dimensional photographic image is employed for the exhibitor avatar AS.
Although details will be described later, in FIG. 1 the exhibitor avatar AS displays, in the form of a speech bubble, the line "This is the booth of 〇〇〇, which provides a 3D online event service." That is, a visitor who is about to stop by the booth B is presented with a line from the exhibitor avatar AS.
Below, the visitor avatar AR and the exhibitor avatar AS in the avatar viewpoint image GA shown in FIG. 1 will be described in more detail.
Since the exhibitor avatar AS is a two-dimensional image, it is included in the avatar viewpoint image GA at high image quality. This allows visitors to perceive the exhibitor avatar AS as a more realistic live-action 2D avatar (a photographic image), giving them a sense of familiarity. As a result, it becomes easier for visitors to talk to the exhibitor.
There also exist 3D avatars composed of a large number of polygons that can achieve image quality in the avatar viewpoint image GA comparable to a live-action 2D avatar. However, employing such 3D avatars for both the exhibitor avatar AS and the visitor avatar AR imposes an extremely high performance load.
Therefore, this service employs a live-action 2D avatar as the exhibitor avatar AS. This reduces the performance load while also making it easier for visitors to talk to exhibitors, as described above.
In addition, as shown in FIG. 1, this service employs an avatar made of coarse three-dimensional polygons as the visitor avatar AR. This reduces the performance load and, as a result, makes it possible to place a large number of avatars in the exhibition hall TH.
Furthermore, the visitor avatar AR is an avatar made up of three-dimensional polygons, and thus differs in character from the exhibitor avatar AS. For this reason, the exhibitor avatar AS stands out in the avatar viewpoint image GA, making it easy for a visitor to identify an exhibitor who can be talked to.
Below, as a difference in character between the visitor avatar AR and the exhibitor avatar AS, the difference in how the avatars appear in the avatar viewpoint image GA will be described.
Consider the case where one visitor avatar AR is looking at another visitor avatar AR.
In this case, the other visitor avatar is a 3D avatar. Therefore, as described above, the other visitor avatar is included in the avatar viewpoint image GA as relatively coarse three-dimensional polygons.
Also, when the visitor moves from facing the other visitor avatar AR to circling around to its side while looking at it, the avatar viewpoint image GA reflects the other visitor avatar AR as seen from an angle or from the side.
Next, consider the case where the visitor avatar AR is looking at the exhibitor avatar AS.
In this case, the exhibitor avatar AS is a live-action 2D avatar (a two-dimensional photographic image). Therefore, as described above, the exhibitor avatar AS is included in the avatar viewpoint image GA as a high-quality photograph.
Moreover, a live-action 2D avatar (a two-dimensional photographic image) has no thickness in the three-dimensional virtual space. Therefore, if the exhibitor avatar were fixed in the three-dimensional virtual space, then when the visitor avatar AR moved from facing the exhibitor avatar AS to circling around to its side, the exhibitor avatar AS would appear distorted by perspective, become thin, or, viewed from the side, have no thickness and become invisible.
In this way, when the visitor avatar AR moves, another visitor avatar AR, which is a 3D avatar, and the exhibitor avatar AS, which is a live-action 2D avatar, appear differently.
This makes it easy for a visitor to distinguish between visitor avatars AR and exhibitor avatars AS in the avatar viewpoint image GA.
On the premise of the above characteristics, this service has a "2D avatar automatic direction change function" in order to obtain an avatar image more suitable for visitors. The "2D avatar automatic direction change function" will be described below with further reference to FIG. 2.
FIG. 2 is a diagram showing an example, different from FIG. 1, of a screen displayed in the service provided using the server according to an embodiment of the information processing device of the present invention.
Compared with FIG. 1, the visitor avatar AR in FIG. 2 has moved to the right around the booth B and the exhibitor avatar AS (counterclockwise as the exhibition hall TH is viewed from above). That is, in both of the avatar viewpoint images GA shown in FIGS. 1 and 2, the exhibitor avatar AS directly faces the visitor avatar.
In other words, in this service, the 2D avatar automatic direction change function automatically changes the direction of the exhibitor avatar AS so that it directly faces the position of the visitor avatar AR.
As a result, the exhibitor avatar AS never appears distorted by perspective, thinned, or invisible from the side, so the position of the exhibitor avatar AS in the avatar viewpoint image GA is easy to grasp.
Furthermore, as described above, when the visitor avatar AR moves, another visitor avatar AR is reflected in the avatar viewpoint image GA as seen from an angle or from the side. In contrast, when the visitor avatar AR moves, the exhibitor avatar AS is always reflected in the avatar viewpoint image GA as facing the viewer directly.
This further facilitates grasping the position of the exhibitor avatar AS in the avatar viewpoint image GA.
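The automatic direction change described above is, in rendering terms, the classic "billboarding" technique: the flat exhibitor avatar is rotated about its vertical axis so that its face always points at the visitor avatar's position. A minimal sketch, assuming a y-up coordinate system and a hypothetical function name:

```python
import math

def face_visitor(exhibitor_pos, visitor_pos):
    """Return the yaw (degrees, about the vertical y axis) that makes a 2D
    exhibitor avatar at exhibitor_pos directly face visitor_pos."""
    dx = visitor_pos[0] - exhibitor_pos[0]
    dz = visitor_pos[2] - exhibitor_pos[2]
    # atan2 gives the heading toward the visitor in the horizontal x-z plane.
    return math.degrees(math.atan2(dx, dz))

# As the visitor avatar circles the booth, the sprite's yaw tracks them:
yaw_ahead = face_visitor((0, 0, 0), (0, 0, 5))  # visitor straight ahead
yaw_right = face_visitor((0, 0, 0), (5, 0, 0))  # visitor 90 degrees around
```

Applying this yaw on every frame keeps the 2D avatar facing the visitor, so the sprite never presents its zero-thickness edge to the viewer.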
The basic flow of this service will now be described.
The organizer, the exhibitors, and the visitors can each use this service by connecting, from their respective terminals via a browser function over the Internet or the like, to a server (for example, the server 1 in FIG. 3, described later) managed by the provider of this service (hereinafter referred to as the "service provider"). That is, this service is realized as a so-called cloud service.
Here, instead of the browser function, the service may be made available through dedicated application software (hereinafter referred to as a "dedicated app") installed in advance on the terminal.
First, the organizer operates a terminal to decide the shape of the exhibition hall TH and the layout plan of the booths B.
The organizer then invites multiple exhibitors, negotiates with them, and decides the booth arrangement.
The organizer operates the terminal to display a management screen and, on that screen, arranges each exhibitor's booth B in the exhibition hall TH according to the booth arrangement.
An exhibitor operates a terminal to display a management screen and makes various settings for its own booth B on that screen. At that time, the exhibitor operates the terminal to set the avatar of the person in charge who will give explanations at the booth B, that is, the above-described exhibitor avatar AS. As will be described later with reference to FIG. 6, the exhibitor sets the exhibitor avatar AS by selecting an arbitrary avatar from prepared exhibitor avatars AS or by uploading a photograph of the person in charge taken in advance.
This enables visitors (avatars A) to come to the exhibition hall TH in the three-dimensional virtual space.
After operating a terminal to pre-register, a visitor can log in on the day of the exhibition (event) and move around the exhibition hall TH as an avatar A. The movement of the avatar A is displayed on the terminal as the avatar viewpoint image GA.
The visitor can operate the terminal to move around the exhibition hall TH, visit any booth B of interest, view the exhibits, and listen to the exhibitor's explanations.
Although details will be described later, a visitor can use various functions that improve convenience when visiting each booth B.
Next, with reference to FIG. 3, the configuration of the information processing system that realizes the provision of the service of FIG. 1 described above, that is, the information processing system to which the server according to an embodiment of the information processing device of the present invention is applied, will be described.
FIG. 3 is a diagram showing an example of the configuration of an information processing system to which the server according to an embodiment of the information processing device of the present invention is applied.
 図5に示す情報処理システムは、サーバ1と、主催者端末2と、出展者端末3-1乃至3-n(nは1以上の整数値)と、来場者端末4-1乃至4-m(mはnとは独立した1以上の整数値)を含むように構成されている。
 サーバ1、主催者端末2、出展者端末3-1乃至3-n、及び来場者端末4-1乃至4-mは、インターネット等の所定のネットワークNWを介して相互に接続されている。
The information processing system shown in FIG. 2 includes a server 1, an organizer terminal 2, exhibitor terminals 3-1 to 3-n (n is an integer value of 1 or more), and visitor terminals 4-1 to 4-m (m is an integer value of 1 or more independent of n).
The server 1, organizer terminal 2, exhibitor terminals 3-1 to 3-n, and visitor terminals 4-1 to 4-m are interconnected via a predetermined network NW such as the Internet.
The server 1 is an information processing device managed by the service provider. The server 1 executes various processes to realize the present service while communicating with the organizer terminal 2, the exhibitor terminals 3-1 to 3-n, and the visitor terminals 4-1 to 4-m as appropriate.
The organizer terminal 2 is an information processing device operated by the organizer US. The organizer terminal 2 is composed of a personal computer, a tablet, a smartphone, or the like. The organizer terminal 2, for example, receives various information input operations by the organizer US and transmits them to the server 1, and receives and displays various information transmitted from the server 1. Note that although only one organizer terminal 2 is depicted in FIG. 2, this is a simplification to make the explanation easier to understand. In reality, there may be organizer terminals 2 corresponding to a plurality of exhibitions (events), that is, to a plurality of organizers US.
Each of the exhibitor terminals 3-1 to 3-n is an information processing device operated by each of the exhibitors UT-1 to UT-n. As described above, the exhibitor terminal 3 is composed of a personal computer, a tablet, a smartphone, etc.
Note that hereinafter, unless it is necessary to distinguish between exhibitors UT-1 to UT-n, they will be collectively referred to as exhibitors UT. Furthermore, when referred to as exhibitor UT, exhibitor terminals 3-1 to 3-n are collectively referred to as "exhibitor terminal 3."
The exhibitor terminal 3 receives various information input operations by the exhibitor UT and transmits the received information to the server 1, or receives and displays various information transmitted from the server 1, for example.
Each of the visitor terminals 4-1 to 4-m is an information processing device operated by each of the visitors UR-1 to UR-m. As described above, the visitor terminal 4 is composed of a personal computer, a tablet, a smartphone, or the like.
Note that hereinafter, if there is no need to distinguish each of the visitors UR-1 to UR-m, they will be collectively referred to as the visitor UR. Furthermore, when referred to as visitor UR, visitor terminals 4-1 to 4-m are collectively referred to as "visitor terminal 4."
The visitor terminal 4 receives various information input operations by the visitor UR, for example, and transmits the received information to the server 1, or receives and displays various information transmitted from the server 1.
FIG. 3 is a block diagram showing an example of the hardware configuration of the server in the information processing system shown in FIG. 2.
The server 1 includes a CPU (Central Processing Unit) 11, a ROM (Read Only Memory) 12, a RAM (Random Access Memory) 13, a bus 14, an input/output interface 15, an input unit 16, an output unit 17, a storage unit 18, a communication unit 19, and a drive 20.
The CPU 11 executes various processes according to programs recorded in the ROM 12 or programs loaded into the RAM 13 from the storage unit 18.
The RAM 13 also appropriately stores data necessary for the CPU 11 to execute various processes.
The CPU 11, the ROM 12, and the RAM 13 are interconnected via a bus 14. An input/output interface 15 is also connected to this bus 14. An input unit 16, an output unit 17, a storage unit 18, a communication unit 19, and a drive 20 are connected to the input/output interface 15.
The input unit 16 includes, for example, a keyboard, and inputs various information.
The output unit 17 includes a display such as a liquid crystal display, a speaker, and the like, and outputs various information as images and sounds.
The storage unit 18 is composed of a DRAM (Dynamic Random Access Memory) or the like, and stores various data.
The communication unit 19 communicates with other devices (for example, the organizer terminal 2, the exhibitor terminals 3-1 to 3-n, and the visitor terminals 4-1 to 4-m in FIG. 2) via a network NW including the Internet.
A removable medium 31, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 20 as appropriate. A program read from the removable medium 31 by the drive 20 is installed in the storage unit 18 as necessary.
Further, the removable medium 31 can also store the various data stored in the storage unit 18, in the same manner as the storage unit 18.
Although not shown, the organizer terminal 2, the exhibitor terminal 3, and the visitor terminal 4 in FIG. 2 can also have basically the same hardware configuration as that shown in FIG. 3. Therefore, a description of the hardware configurations of the organizer terminal 2, the exhibitor terminal 3, and the visitor terminal 4 is omitted.
Various types of processing can be executed through cooperation between various types of hardware and various types of software in the server 1 shown in FIG. 3. As a result, the above-mentioned service can be provided.
The functional configuration executed in the server 1 of FIG. 3 that constitutes the information processing system of FIG. 2 will be described below.
FIG. 4 is a functional block diagram showing an example of the functional configuration of the server in FIG. 3 that constitutes the information processing system in FIG. 2.
As shown in FIG. 4, an exhibition hall setting unit 51, an exhibitor avatar generation unit 52, an avatar movement unit 53, a virtual space arrangement unit 54, an avatar viewpoint image generation unit 55, a display control unit 56, a line acquisition unit 57, a line output unit 58, a chat control unit 59, a trajectory collection unit 60, and an exhibitor-side response control unit 61 function in the CPU 11 of the server 1.
The exhibition hall setting unit 51 sets the exhibition hall TH in the three-dimensional virtual space based on the organizer US's operation on the organizer terminal 2.
The exhibitor avatar generation unit 52 generates an avatar composed of two-dimensional information as the exhibitor avatar AS in the exhibition hall TH, based on an instruction to generate the exhibitor avatar AS for a booth B in the exhibition hall TH, among the instructions given by the organizer US's operation on the organizer terminal 2 or the exhibitor UT's operation on the exhibitor terminal 3.
Note that, as necessary, the exhibitor avatar generation unit 52 can also generate an avatar composed of two-dimensional information based on an instruction to select an arbitrary one from a plurality of exhibitor avatars AS prepared in advance, among the instructions given by the organizer US's operation on the organizer terminal 2 or the exhibitor UT's operation on the exhibitor terminal 3.
Also, as necessary, the exhibitor avatar generation unit 52 can generate an avatar composed of two-dimensional information based on an instruction, among the instructions given by the exhibitor UT's operation, to transmit and select a photographic image in which the real exhibitor or a predetermined model is captured.
That is, the exhibitor avatar generation unit 52 can cause the exhibitor terminal 3 of the exhibitor UT to present the UI (User Interface) shown in FIG. 6, and can generate an avatar through the exhibitor UT's operations on it.
FIG. 6 is a diagram showing an example of a screen, among the images displayed on the exhibitor terminal under the control of the server having the functional configuration of FIG. 4, for performing various settings regarding the booth, including the avatar.
As shown in FIG. 6, the screen for making various settings regarding the booth, including the avatar, contains "3D booth exterior," "3D booth interior," "2D booth," "Booth basic information," and "Staff management" as setting menus for booth B. As also shown in FIG. 6, the screen contains "Visitor list" and "Chat settings" as menus regarding visitors, as well as a data "Download" menu.
The image shown in FIG. 6 includes a plurality of live-action 2D avatars (two-dimensional photographic images) as "guide characters." The exhibitor UT can select any one of the 2D avatars (two-dimensional photographic images) included among the "guide characters" as the exhibitor avatar AS.
Further, although not shown, the exhibitor UT can upload any photographic image via the exhibitor terminal 3 to use it as a guide character. This allows the exhibitor UT to use any photographic image as the exhibitor avatar AS.
That is, for example, the exhibitor UT can use, as the exhibitor avatar AS, a photographic image of the exhibitor's own staff member who handles the chat function described later, a photographic image of a person wearing a uniform designated by the exhibitor, and so on.
Note that the 2D avatar is not limited to a photographic image. That is, for example, the exhibitor UT can also use a 2D character image (2D or 3D character CG image) prepared in advance or uploaded by the exhibitor UT as the "guide character."
In this way, the function that allows the exhibitor UT to use an image (not limited to a photographic image) set by the exhibitor UT as the exhibitor avatar AS is hereinafter referred to as an "exhibitor avatar setting function."
The avatar movement unit 53 moves the current position of the avatar in the virtual space to a second coordinate, based on a movement instruction, among the instructions given by the visitor UR's operation on the visitor terminal 4, to move the visitor's own avatar A to the second coordinate in the exhibition hall TH.
Here, the movement instruction to move the visitor's own avatar A to the second coordinate in the exhibition hall TH, among the instructions given by the visitor UR's operation on the visitor terminal 4, also includes an instruction to change the posture of avatar A.
Therefore, the avatar moving unit 53 changes the posture based on the change instruction and moves the avatar A to the above-mentioned second coordinates.
The virtual space placement unit 54 places the exhibition hall TH, in which the booth B is placed at the first coordinates, in the three-dimensional virtual space, and places the avatar A at the second coordinates.
Further, the virtual space arrangement unit 54 can place, as the exhibitor avatar AS, the photographic image of the real exhibitor or a predetermined model generated by the exhibitor avatar generation unit 52, based on that photographic image.
Further, the virtual space arrangement unit 54 arranges the exhibitor avatar AS in an orientation corresponding to the posture of the visitor avatar AR. Specifically, for example, the exhibitor avatar is arranged so that, in the avatar viewpoint image GA seen from the position and posture of the visitor avatar AR, the exhibitor avatar AS at its placement position appears directly facing the viewer.
The avatar viewpoint image generation unit 55 generates, as the avatar viewpoint image GA, a three-dimensional image captured from a viewpoint set based on the second coordinate, in the virtual space in which the exhibition hall TH with booth B placed at the first coordinate and avatar A at the second coordinate are respectively arranged.
As a result, the exhibitor avatar AS is included in the avatar viewpoint image GA facing the viewer directly, regardless of the position and posture of the visitor avatar AR.
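One common way to realize this always-facing behavior for a flat 2D avatar is billboarding: the sprite's yaw is recomputed from the visitor avatar's position whenever the viewpoint changes. The following is a hedged sketch under that assumption (the patent does not prescribe an implementation; names and the coordinate convention are illustrative):

```python
import math

def billboard_yaw(sprite_pos: tuple[float, float],
                  viewer_pos: tuple[float, float]) -> float:
    """Return the yaw (degrees) that rotates a 2D exhibitor sprite at
    sprite_pos so that its front faces a viewer at viewer_pos.
    Convention: yaw 0 faces +z; positions are (x, z) on the floor plane."""
    dx = viewer_pos[0] - sprite_pos[0]
    dz = viewer_pos[1] - sprite_pos[1]
    return math.degrees(math.atan2(dx, dz)) % 360.0
```

Calling this per rendered frame, once per visitor, matches the note below that each visitor effectively sees a separate virtual space as far as the exhibitor avatar AS is concerned.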
The display control unit 56 executes control to display the avatar viewpoint image GA on the visitor terminal 4, as shown in FIG. 1.
In this way, the above-described "2D avatar automatic direction change function" is realized by the execution of the series of processes described above by the exhibition hall setting unit 51, the exhibitor avatar generation unit 52, the avatar movement unit 53, the virtual space arrangement unit 54, the avatar viewpoint image generation unit 55, and the display control unit 56.
That is, the automatic direction change function for 2D avatars is a function that places the live-action 2D avatar (two-dimensional photographic image) set on the organizer terminal 2 or the exhibitor terminal 3 in the virtual space based on the position and posture of the visitor avatar AR moved on the visitor terminal 4, and generates the avatar viewpoint image GA accordingly.
Note that the avatar viewpoint image GA is provided to each of the plurality of visitors in the exhibition hall TH so that the exhibitor avatar AS directly faces each of them. That is, the virtual space for each of the plurality of visitors can be said to be a separate virtual space that differs at least with respect to the exhibitor avatar AS.
The line acquisition unit 57 acquires lines to be output from the exhibitor avatar AS.
Specifically, for example, the line acquisition unit 57 acquires, as a line, the character string entered in the "guide's guidance message" field for the guide character shown in FIG. 6.
The line output unit 58 executes control to output the line acquired by the line acquisition unit 57 from the visitor terminal 4 of the visitor UR corresponding to the visitor avatar AR, as a line uttered by the exhibitor avatar AS present in booth B.
Specifically, for example, as shown in FIG. 1, the dialogue output unit 58 outputs the dialogue by displaying it as a speech bubble of the exhibitor avatar AS superimposed on the avatar viewpoint image GA.
For example, the dialogue output unit 58 can also perform control to output dialogue input as a character string from the visitor terminal 4 as a voice read aloud by a predetermined AI or the like.
Furthermore, for example, the line acquisition unit 57 may acquire the line as audio data, and the line output unit 58 may perform control to output the line as audio from the visitor terminal 4.
Further, for example, the line acquisition unit 57 can acquire lines from a chatbot engine or an AI chat engine (not shown). At this time, data such as the name and time of the visitor UR who came to booth B, lead information, action history, chat history, etc. are provided to the chatbot engine or the AI chat engine. The chatbot engine or AI chat engine then generates lines based on this data.
As a result, different lines are acquired by the line acquisition unit 57 for, for example, a visitor UR belonging to a listed company and a visitor UR belonging to a small to medium-sized company.
As a result, the visitor UR is spoken to by the exhibitor avatar AS with lines suited to the visitor's own situation, so the response by the exhibitor avatar AS at booth B becomes more appropriate.
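The situation-dependent line selection described above could be sketched as a simple rule-based stub standing in for the chatbot/AI engine. Everything here is an illustrative assumption (the company-size rule, the field names, and the sample lines are not specified by the patent):

```python
def generate_greeting(visitor: dict) -> str:
    """Pick a line for the exhibitor avatar AS from visitor data
    (name, lead information, histories) provided to the chat engine.
    A real system would delegate this to a chatbot or AI engine."""
    name = visitor.get("name", "Guest")
    if visitor.get("company_type") == "listed":
        # a listed-company visitor gets an enterprise-oriented opener
        return f"{name}, thank you for visiting. Shall we discuss an enterprise rollout?"
    # default opener for smaller organizations or unknown visitors
    return f"{name}, welcome! Would you like a quick demo for smaller teams?"
```

The point of the sketch is only that the same hook receives visitor data and returns different lines for different visitor profiles, which is what makes the avatar's opening remark feel tailored.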
Although not shown, the exhibitor UT may have the audio to be output from the visitor terminal 4 recorded in audio form and acquired as a line. That is, in this case, the line acquisition unit 57 acquires the audio recorded via the exhibitor terminal 3 as the line. The line output unit 58 then causes the visitor terminal 4 to output the acquired audio line.
The function by which the exhibitor UT has the exhibitor avatar AS use lines set by the exhibitor UT in this way is hereinafter referred to as the "exhibitor avatar line function."
Here, the visitor UR (via his or her avatar A) can converse by chat with the person in charge by going to a booth B where the person in charge is present.
Note that, in order to realize such chat conversations, the chat control unit 59 of FIG. 4 is provided in the CPU 11 of the server 1.
That is, in order to allow a chat between the visitor UR corresponding to the avatar A present in booth B and the person in charge present at that booth B, the chat control unit 59 controls communication between the visitor terminal 4 operated by the visitor UR and the exhibitor terminal 3 or another terminal operated by the person in charge.
Note that, hereinafter, the function of chatting between the visitor UR corresponding to avatar A present in booth B and the person in charge present at booth B will be referred to as a "chat function".
Here, a plurality of exhibitor representatives can be assigned to the exhibitor avatar AS. When the visitor UR takes an action to talk to the exhibitor avatar AS, the person in charge of the assigned exhibitor UT can respond via text chat, voice chat, video chat, etc. on the management screen.
That is, a person in charge of the exhibitor is linked to the exhibitor avatar AS on the management screen. The visitor UR can have a voice chat, text chat, video chat, or the like with the person (the exhibitor UT's person in charge) linked to the exhibitor avatar AS that the visitor spoke to. Note that only one person may be linked.
The trajectory collection unit 60 acquires trajectory data indicating the movement trajectory within the exhibition hall TH of each avatar A operated by each of the visitors UR-1 to UR-m in FIG. 2, aggregates the trajectory data of each of the visitors UR-1 to UR-m, and performs a predetermined analysis.
The exhibitor-side response control unit 61 measures, based on the trajectory data, the stay time of a visitor avatar AR staying at booth B and, when the stay time reaches or exceeds a predetermined value, executes at least one of control for sending a predetermined notification to the exhibitor terminal 3 and control for causing the exhibitor avatar AS to produce a predetermined output on the visitor terminal 4.
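A minimal sketch of this dwell-time check follows. The trajectory format (timestamped position samples), the booth-membership predicate, and the threshold value are assumptions for illustration; the patent only requires that stay time be measured from trajectory data and compared against a predetermined value:

```python
def booth_dwell_seconds(trajectory, in_booth) -> float:
    """Sum the time a visitor avatar spends inside a booth, given a
    trajectory of (timestamp_seconds, (x, y)) samples and a predicate
    in_booth(pos) that tests booth membership."""
    total = 0.0
    for (t0, p0), (t1, _p1) in zip(trajectory, trajectory[1:]):
        if in_booth(p0):  # attribute each interval to its starting sample
            total += t1 - t0
    return total

def should_notify_exhibitor(trajectory, in_booth, threshold_s: float = 60.0) -> bool:
    """True when the stay time reaches the threshold, i.e. when the
    exhibitor terminal 3 would receive the predetermined notification."""
    return booth_dwell_seconds(trajectory, in_booth) >= threshold_s
```

The same accumulated value could equally drive the other branch, namely having the exhibitor avatar AS speak to the lingering visitor.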
That is, the server 1 can cause the exhibitor terminal 3 of the exhibitor UT to present the UI shown in FIG. 7, prompting a response to a visitor UR whose stay time is equal to or longer than the predetermined value.
FIG. 7 is a diagram showing an example of a screen, among the images displayed on the exhibitor terminal under the control of the server having the functional configuration of FIG. 4, that provides visitor information to the exhibitor.
In FIG. 7, the "visitor list" menu of FIG. 6 is selected.
As shown in FIG. 7, in the visitor list, the name and time of the visitor UR who came to booth B, lead information, action history, chat history, etc. are displayed.
Although not shown, if a visitor UR stays at booth B for a predetermined time, it is considered that the visitor UR is interested in the contents of booth B. Therefore, in this service, when the visitor avatar AR stays at booth B for a predetermined time, the visitor UR is prompted to respond.
Hereinafter, the function of prompting (guiding) the exhibitor UT to respond to the visitor UR who has come to the booth B will be referred to as the "response guidance function by the exhibitor."
Also, for example, the exhibitor-side response control unit 61 may have the exhibitor avatar AS speak a predetermined message to a visitor UR who has stayed at booth B for the predetermined time. That is, for example, a message such as "Is there anything you are curious about?" is provided to the visitor UR. This starts a chat with the visitor UR, and the exhibitor UT is prompted to respond based on the reply.
Note that, in response to a predetermined trigger for the purpose of attending, the virtual space arrangement unit 54 can place the exhibitor avatar at a position different from the position inside booth B, at which the exhibitor avatar AS is included as a subject in the avatar viewpoint image GA.
Specifically, as shown in FIG. 8, when the visitor avatar AR is located at a position other than booth B, the virtual space arrangement unit 54 can place the exhibitor avatar AS in the avatar viewpoint image GA. At this time, a predetermined line is output from the placed exhibitor avatar AS.
FIG. 8 is a diagram showing an example of a screen, among the images displayed on the exhibitor terminal under the control of the server having the functional configuration of FIG. 4, that includes attending to a visitor.
That is, in FIG. 8, the visitor avatar AR is located in a passage area other than booth B. In the avatar viewpoint image GA, an exhibitor avatar AS1 other than the exhibitor avatar AS of booth B is placed. The exhibitor avatar AS1 utters, in the form of a speech bubble, the line "Next, please try going to the XXX booth at the back right. There is an interesting MA."
In this way, at the timing when the visitor should be attended to, the avatar viewpoint image GA is displayed so as to include the exhibitor avatar AS1 that provides the attending.
Then, in the case of automatic attending, the exhibitor avatar AS1 introduces a booth B selected by AI or the like, or guides the visitor to the next booth that the visitor is considered likely to be interested in.
Note that the visitor UR can also hide the attending exhibitor avatar AS1 by performing a predetermined operation.
The function of attending to the visitor UR in this way is hereinafter referred to as the "attending function using a 2D avatar."
Although one embodiment of the present invention has been described above, the present invention is not limited to the above-described embodiment, and modifications, improvements, and the like within a range in which the object of the present invention can be achieved are regarded as being included in the present invention.
The system configuration shown in FIG. 2 and the hardware configuration of the server 1 shown in FIG. 3 are merely examples for achieving the object of the present invention, and are not particularly limited.
Furthermore, the functional block diagram shown in FIG. 4 is merely an example and is not particularly limited. That is, it is sufficient that the information processing system of FIG. 2 is equipped with functions capable of executing the above-described processing as a whole, and what functional blocks and databases are used to realize these functions is not particularly limited to the example of FIG. 4.
Further, the locations of the functional blocks and the database are not limited to those shown in FIG. 4, and may be arbitrary.
In the example of FIG. 4, all processing is performed under the control of the CPU 11 of the server 1 in FIG. 3, which constitutes the information processing system of FIG. 2, but the configuration is not limited to this. For example, at least some of the functional blocks and databases located on the server 1 side may be provided on the organizer terminal 2 side, the exhibitor terminal 3 side, the visitor terminal 4 side, or on another information processing device (not shown).
Furthermore, the series of processes described above can be executed by hardware or by software.
Further, one functional block may be configured by a single piece of hardware, a single piece of software, or a combination thereof.
When a series of processes is executed by software, a program constituting the software is installed on a computer or the like from a network or a recording medium.
The computer may be a computer built into dedicated hardware.
Further, the computer may be a computer that can execute various functions by installing various programs, such as a server, a general-purpose smartphone, or a personal computer.
A recording medium containing such a program is constituted not only by removable media (not shown) distributed separately from the device body in order to provide the program to the user, but also by a recording medium or the like provided to the user in a state pre-installed in the device body.
Note that, in this specification, the steps describing the program recorded on a recording medium include not only processes performed chronologically in the stated order, but also processes that are not necessarily performed chronologically and are instead executed in parallel or individually.
To summarize the above, an information processing device to which the present invention is applied need only have the following configuration, and can take a wide variety of embodiments.
That is, an information processing device to which the present invention is applied (for example, the server 1 in FIG. 3) includes:
arrangement means for arranging, in a three-dimensional virtual space, an exhibition hall (exhibition hall TH) in which a booth (booth B in FIG. 1) is placed, an exhibitor avatar (exhibitor avatar AS in FIG. 1) present as an exhibitor in the booth, and a visitor avatar (visitor avatar AR in FIG. 1) movable as a visitor within the exhibition hall;
exhibitor avatar generation means for generating the exhibitor avatar as an avatar composed of two-dimensional information, based on a first instruction to generate the exhibitor avatar among instructions given by a first user's (exhibitor UT's) operation of a first device (exhibitor terminal 3 in FIG. 2);
visitor avatar moving means for moving the avatar to a predetermined position in the virtual space, based on a second instruction to move the avatar to the predetermined position within the exhibition hall among instructions given by a second user's (visitor UR's) operation of a second device (visitor terminal 4 in FIG. 2);
image generation means for generating, as a visitor viewpoint image (for example, the avatar viewpoint image GA in FIG. 1), a three-dimensional image captured from a viewpoint set based on the coordinates, in the virtual space in which the exhibition hall, with the booth in which the exhibitor avatar composed of two-dimensional information is present at a fixed position, and the visitor avatar at the predetermined position are respectively arranged; and
display control means for executing control to display the visitor viewpoint image on the second device.
This realizes the above-described "exhibitor avatar display function using 2D avatars".
As a result, the convenience of a system that uses avatar viewpoint images of a three-dimensional virtual space can be improved.
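Purely as an illustrative sketch (the function names, the eye height, and the coordinate convention below are assumptions and do not appear in the specification), a viewpoint for the visitor viewpoint image could be derived from the visitor avatar's coordinates along the following lines:

```python
import math
from dataclasses import dataclass

@dataclass
class Avatar:
    x: float
    y: float
    z: float
    yaw: float  # heading in radians; 0 = facing the +x axis

def camera_viewpoint(visitor: Avatar, eye_height: float = 1.6):
    """Derive a camera pose from the visitor avatar's position and heading.

    The camera sits at the avatar's eye height and looks one unit ahead
    along the avatar's heading; the resulting (eye, look_at) pair would be
    handed to a 3D renderer to produce the visitor viewpoint image.
    """
    eye = (visitor.x, visitor.y + eye_height, visitor.z)
    look_at = (visitor.x + math.cos(visitor.yaw),
               visitor.y + eye_height,
               visitor.z + math.sin(visitor.yaw))
    return eye, look_at

# Example: a visitor avatar at the origin facing +x; eye is at (0.0, 1.6, 0.0)
eye, target = camera_viewpoint(Avatar(0.0, 0.0, 0.0, 0.0))
```

In such a sketch the avatar's coordinates fully determine the camera pose, which is the sense in which the viewpoint is "set based on the coordinates" above.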
In an information processing device to which the present invention is applied, there may further be provided:
line acquisition means for acquiring a line to be output from the exhibitor avatar; and
line output means for executing control to output the acquired line from the first device of the first user corresponding to the visitor avatar present in the booth.
This realizes the above-described "exhibitor avatar's dialogue output function".
In an information processing device to which the present invention is applied,
the second instruction may include an instruction to change the posture of the avatar,
the avatar moving means may change the avatar to the posture based on the change instruction and move the avatar to the second coordinates, and
the arrangement means may arrange the exhibitor avatar at the fixed position in an orientation corresponding to the posture of the visitor avatar present in the booth.
This realizes the above-described "2D avatar automatic direction change function".
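Orienting a flat, two-dimensional avatar toward a viewer amounts to billboarding the image about the vertical axis. As a hypothetical sketch (the coordinate convention and function name are assumptions, not taken from the specification):

```python
import math

def exhibitor_yaw(exhibitor_pos, visitor_pos):
    """Yaw (radians) that turns a flat 2D exhibitor avatar toward the visitor.

    Positions are (x, y, z) tuples with y vertical. Rotating the avatar's
    billboard plane by this angle about the vertical axis makes it face the
    visitor avatar's position on the horizontal (x, z) plane.
    """
    dx = visitor_pos[0] - exhibitor_pos[0]
    dz = visitor_pos[2] - exhibitor_pos[2]
    return math.atan2(dx, dz)  # 0 when the visitor is straight ahead on +z
```

Recomputing this angle whenever the visitor avatar moves or changes posture would keep the two-dimensional exhibitor avatar facing the visitor automatically.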
In an information processing device to which the present invention is applied,
the first instruction may include a prepared image prepared in advance by the first user, and
the exhibitor avatar generation means may generate the exhibitor avatar based on that prepared image.
This realizes the above-described "exhibitor avatar setting function".
In an information processing device to which the present invention is applied, there may further be provided chat control means for executing control of communication between the second device operated by the second user and the first device or another device operated by the person in charge, so that the second user corresponding to the visitor avatar present at the booth can chat with the person in charge associated with the exhibitor avatar.
This realizes the above-described "exhibitor-side person-in-charge assignment chat function".
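One plausible way to associate one of several staff members with each incoming chat is a simple round-robin rotation over the exhibitor's roster. The class below is a hypothetical sketch; the roster format and the session dictionary are illustrative assumptions, not part of the specification:

```python
from itertools import cycle

class ChatAssigner:
    """Assign one of an exhibitor's staff members to each incoming chat.

    Cycles through the staff roster so that successive visitor chats are
    spread evenly across the one or more persons in charge associated
    with the exhibitor avatar.
    """

    def __init__(self, staff):
        if not staff:
            raise ValueError("at least one staff member is required")
        self._rotation = cycle(staff)

    def assign(self, visitor_id):
        # Pick the next staff member in rotation and open a session record.
        staff_member = next(self._rotation)
        return {"visitor": visitor_id, "staff": staff_member}

assigner = ChatAssigner(["staff_a", "staff_b"])
```

The session record returned by `assign` would then identify which device (the first device or another device operated by that person in charge) the chat traffic is routed to.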
In an information processing device to which the present invention is applied, there may further be provided:
trajectory collection means for collecting trajectory data indicating the trajectory of movement within the exhibition hall of the visitor avatar corresponding to the second user; and
exhibitor-side response control means for measuring, based on the trajectory data, the stay time of the visitor avatar staying at the booth and, when that stay time reaches or exceeds a predetermined value, executing at least one of control to issue a predetermined notification to the first device and control to cause the exhibitor avatar to produce a predetermined output on the second device.
This realizes the above-described "function for guiding exhibitors to respond".
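Stay-time measurement from trajectory data can be sketched as accumulating the elapsed time between consecutive trajectory samples whose positions fall inside the booth. All names and the sampling model below are illustrative assumptions:

```python
def booth_stay_time(trajectory, in_booth, threshold_s):
    """Accumulate the time a visitor avatar spends inside a booth.

    trajectory:  list of (timestamp_s, (x, y, z)) samples of the avatar's path.
    in_booth:    predicate returning True when a position lies inside the booth.
    threshold_s: stay time at which the exhibitor side should be alerted.

    Returns (stay_seconds, notify); notify is True once the accumulated stay
    time reaches the threshold.
    """
    stay = 0.0
    for (t0, p0), (t1, _p1) in zip(trajectory, trajectory[1:]):
        if in_booth(p0):  # credit each interval to its starting sample
            stay += t1 - t0
    return stay, stay >= threshold_s
```

When `notify` becomes True, the system would issue the predetermined notification to the exhibitor's device, cause the exhibitor avatar to produce a predetermined output on the visitor's device, or both.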
In an information processing device to which the present invention is applied, the arrangement means may, in response to a predetermined trigger for the purpose of attending, place the exhibitor avatar at a position within the booth different from the fixed position, at which the exhibitor avatar is included as a subject in the visitor viewpoint image.
This realizes the above-described "attend function using 2D avatars".
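Placing the exhibitor avatar so that it appears as a subject in the visitor viewpoint image can be sketched as choosing a point a short distance ahead of the visitor avatar, offset to one side so the view is not blocked. The distances and coordinate convention below are assumptions for illustration only:

```python
import math

def attend_position(visitor_pos, visitor_yaw, distance=2.0, side_offset=0.7):
    """Pick a spot in front of the visitor avatar for the 2D exhibitor avatar.

    Places the avatar `distance` ahead along the visitor's heading (yaw in
    radians, 0 = +x axis), shifted `side_offset` to the right, so it enters
    the visitor viewpoint image without covering the centre of the view.
    """
    fx, fz = math.cos(visitor_yaw), math.sin(visitor_yaw)  # forward vector
    rx, rz = -fz, fx                                        # right vector
    return (visitor_pos[0] + fx * distance + rx * side_offset,
            visitor_pos[1],
            visitor_pos[2] + fz * distance + rz * side_offset)
```

On the attend trigger, the arrangement means would move the exhibitor avatar from its fixed position to a point computed along these lines, guaranteeing it lies within the visitor's field of view.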
DESCRIPTION OF REFERENCE NUMERALS
1: server; 2: organizer terminal; 3: exhibitor terminal; 4: visitor terminal; 11: CPU; 20: drive; 31: removable media; 51: exhibition hall installation unit; 52: exhibitor avatar generation unit; 53: avatar movement unit; 54: virtual space arrangement unit; 55: avatar viewpoint image generation unit; 56: display control unit; 57: line acquisition unit; 58: line output unit; 59: chat control unit; 60: trajectory collection unit; 61: exhibitor-side response control unit

Claims (9)

1.  An information processing device comprising:
    arrangement means for arranging, in a three-dimensional virtual space, an exhibition hall in which a booth is placed, an exhibitor avatar present as an exhibitor in the booth, and a visitor avatar movable as a visitor within the exhibition hall;
    exhibitor avatar generation means for generating the exhibitor avatar as an avatar composed of two-dimensional information, based on a first instruction to generate the exhibitor avatar among instructions given by a first user's operation of a first device;
    visitor avatar moving means for moving the avatar to a predetermined position in the virtual space, based on a second instruction to move the avatar to the predetermined position within the exhibition hall among instructions given by a second user's operation of a second device;
    image generation means for generating, as a visitor viewpoint image, a three-dimensional image captured from a viewpoint set based on the coordinates, in the virtual space in which the exhibition hall, with the booth in which the exhibitor avatar composed of two-dimensional information is present at a fixed position, and the visitor avatar at the predetermined position are respectively arranged; and
    display control means for executing control to display the visitor viewpoint image on the second device.
2.  The information processing device according to claim 1, further comprising:
    line acquisition means for acquiring a line to be output from the exhibitor avatar; and
    line output means for executing control to output the acquired line from the first device of the first user corresponding to the visitor avatar present in the booth.
3.  The information processing device according to claim 1 or 2, wherein
    the second instruction includes an instruction to change the posture of the avatar,
    the avatar moving means changes the avatar to the posture based on the change instruction and moves the avatar to the second coordinates, and
    the arrangement means arranges the exhibitor avatar at the fixed position in an orientation corresponding to the posture of the visitor avatar present in the booth.
4.  The information processing device according to any one of claims 1 to 3, wherein
    the first instruction includes a prepared image prepared in advance by the first user, and
    the exhibitor avatar generation means generates the exhibitor avatar based on that prepared image.
5.  The information processing device according to any one of claims 1 to 4, wherein one or more of K persons in charge (N being an integer value of 1 or more) belonging to the first user are associated with the exhibitor avatar,
    the information processing device further comprising chat control means for executing control of communication between the second device operated by the second user and the first device or another device operated by the person in charge, so that the second user corresponding to the visitor avatar present at the booth can chat with the person in charge associated with the exhibitor avatar.
6.  The information processing device according to any one of claims 1 to 5, further comprising:
    trajectory collection means for collecting trajectory data indicating the trajectory of movement within the exhibition hall of the visitor avatar corresponding to the second user; and
    exhibitor-side response control means for measuring, based on the trajectory data, the stay time of the visitor avatar staying at the booth and, when that stay time reaches or exceeds a predetermined value, executing at least one of control to issue a predetermined notification to the first device and control to cause the exhibitor avatar to produce a predetermined output on the second device.
7.  The information processing device according to any one of claims 1 to 6, wherein the arrangement means, in response to a predetermined trigger for the purpose of attending, places the exhibitor avatar at a position within the booth different from the fixed position, at which the exhibitor avatar is included as a subject in the visitor viewpoint image.
8.  An information processing method executed by an information processing device, the method comprising:
    an arrangement step of arranging, in a three-dimensional virtual space, an exhibition hall in which a booth is placed, an exhibitor avatar present as an exhibitor in the booth, and a visitor avatar movable as a visitor within the exhibition hall;
    an exhibitor avatar generation step of generating the exhibitor avatar as an avatar composed of two-dimensional information, based on a first instruction to generate the exhibitor avatar among instructions given by a first user's operation of a first device;
    a visitor avatar moving step of moving the avatar to a predetermined position in the virtual space, based on a second instruction to move the avatar to the predetermined position within the exhibition hall among instructions given by a second user's operation of a second device;
    an image generation step of generating, as a visitor viewpoint image, a three-dimensional image captured from a viewpoint set based on the coordinates, in the virtual space in which the exhibition hall, with the booth in which the exhibitor avatar composed of two-dimensional information is present at a fixed position, and the visitor avatar at the predetermined position are respectively arranged; and
    a display control step of executing control to display the visitor viewpoint image on the second device.
9.  A program for causing a computer to execute control processing comprising:
    an arrangement step of arranging, in a three-dimensional virtual space, an exhibition hall in which a booth is placed, an exhibitor avatar present as an exhibitor in the booth, and a visitor avatar movable as a visitor within the exhibition hall;
    an exhibitor avatar generation step of generating the exhibitor avatar as an avatar composed of two-dimensional information, based on a first instruction to generate the exhibitor avatar among instructions given by a first user's operation of a first device;
    a visitor avatar moving step of moving the avatar to a predetermined position in the virtual space, based on a second instruction to move the avatar to the predetermined position within the exhibition hall among instructions given by a second user's operation of a second device;
    an image generation step of generating, as a visitor viewpoint image, a three-dimensional image captured from a viewpoint set based on the coordinates, in the virtual space in which the exhibition hall, with the booth in which the exhibitor avatar composed of two-dimensional information is present at a fixed position, and the visitor avatar at the predetermined position are respectively arranged; and
    a display control step of executing control to display the visitor viewpoint image on the second device.
PCT/JP2023/012203 2022-03-29 2023-03-27 Information processing device, information processing method, and program WO2023190344A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-054298 2022-03-29
JP2022054298A JP2023146875A (en) 2022-03-29 2022-03-29 Information processing device, information processing method, and program

Publications (1)

Publication Number Publication Date
WO2023190344A1 true WO2023190344A1 (en) 2023-10-05

Family

ID=88201739

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/012203 WO2023190344A1 (en) 2022-03-29 2023-03-27 Information processing device, information processing method, and program

Country Status (2)

Country Link
JP (1) JP2023146875A (en)
WO (1) WO2023190344A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117591748A (en) * 2024-01-18 2024-02-23 北京笔中文化科技产业集团有限公司 Planning method and device for exhibition route and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001016563A (en) * 1999-04-16 2001-01-19 Nippon Telegr & Teleph Corp <Ntt> Three-dimensional common shaped virtual space display method, three-dimensional common shared virtual space communication system and method, virtual conference system and recording medium recording user terminal program for it
JP2004234054A (en) * 2003-01-28 2004-08-19 Dainippon Printing Co Ltd Virtual exhibition system
JP2007058672A (en) * 2005-08-25 2007-03-08 Adc Technology Kk Evaluation system and program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZIKU INC.: "ZIKU promotional video of Metaverse events platform", YOUTUBE, 13 November 2021 (2021-11-13), XP093095532, Retrieved from the Internet <URL:https://www.youtube.com/watch?v=_WBDwJ_eTEw> *

Also Published As

Publication number Publication date
JP2023146875A (en) 2023-10-12

Similar Documents

Publication Publication Date Title
US11403595B2 (en) Devices and methods for creating a collaborative virtual session
US20180324229A1 (en) Systems and methods for providing expert assistance from a remote expert to a user operating an augmented reality device
US6753857B1 (en) Method and system for 3-D shared virtual environment display communication virtual conference and programs therefor
US11070768B1 (en) Volume areas in a three-dimensional virtual conference space, and applications thereof
US9305465B2 (en) Method and system for topic based virtual environments and expertise detection
US20180356885A1 (en) Systems and methods for directing attention of a user to virtual content that is displayable on a user device operated by the user
US11080941B2 (en) Intelligent management of content related to objects displayed within communication sessions
US20230128659A1 (en) Three-Dimensional Modeling Inside a Virtual Video Conferencing Environment with a Navigable Avatar, and Applications Thereof
US11456887B1 (en) Virtual meeting facilitator
US11609682B2 (en) Methods and systems for providing a communication interface to operate in 2D and 3D modes
KR102449460B1 (en) Method to provide customized virtual exhibition space construction service using augmented and virtual reality
AU2021366657B2 (en) A web-based videoconference virtual environment with navigable avatars, and applications thereof
US11651541B2 (en) Integrated input/output (I/O) for a three-dimensional (3D) environment
WO2023190344A1 (en) Information processing device, information processing method, and program
CN114237540A (en) Intelligent classroom online teaching interaction method and device, storage medium and terminal
JP3452348B2 (en) Speaker identification method in virtual space and recording medium storing the program
CN117806457A (en) Presentation in a multi-user communication session
Pazour et al. Virtual reality conferencing
US11928774B2 (en) Multi-screen presentation in a virtual videoconferencing environment
WO2023190343A1 (en) Information processing device, information processing method, and program
WO2022092122A1 (en) Information processing device
Rogers et al. BubbleVideo: Supporting Small Group Interactions in Online Conferences
TWI799195B (en) Method and system for implementing third-person perspective with a virtual object
US10469803B2 (en) System and method for producing three-dimensional images from a live video production that appear to project forward of or vertically above an electronic display
US20240031531A1 (en) Two-dimensional view of a presentation in a three-dimensional videoconferencing environment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23780354

Country of ref document: EP

Kind code of ref document: A1