JP6567325B2 - Game program - Google Patents

Game program

Info

Publication number
JP6567325B2
Authority
JP
Japan
Prior art keywords
object
virtual viewpoint
view
included
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2015105005A
Other languages
Japanese (ja)
Other versions
JP2015221211A (en)
Inventor
亮二 角田
洋平 三宅
Original Assignee
株式会社コロプラ
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社コロプラ
Priority to JP2015105005A
Publication of JP2015221211A
Publication of JP6567325B2
Application granted
Application status: Active
Anticipated expiration


Description

The present invention relates to a game program that controls the operation of a virtual camera in the virtual space of a game. More specifically, it relates to a game program that causes a virtual camera to automatically track a player object moved through the virtual space by user commands, and that controls the position and orientation of the virtual camera in consideration of the presence of other objects, in particular enemy objects.

A technique is known in which a virtual camera is placed behind the main character (player object) in a game's virtual space and its image is projected in accordance with the character's movement through that space. For applications such as racing games, a technique is also known in which a plurality of virtual cameras are provided in the virtual space and images from their respective fields of view are selectively switched and displayed. In the prior art, efforts have in particular been made to enhance the sense of realism given to the user by devising the operation of such virtual cameras.

For example, in Patent Document 1, when an enemy object exists near a player object moving along a given route in a competitive game, the player's route is automatically changed so as to approach the enemy object, and the virtual camera tracks the player object in conjunction with this change.

In recent years, smartphone games, in which a user plays using the touch screen of his or her smartphone, have become widespread. A smartphone user usually operates the touch screen with one hand while holding the screen in the vertical (portrait) orientation. Accordingly, even in a smartphone game it is preferable to fix the touch screen in the vertical orientation and advance the game so that the user can play one-handed; when the touch screen is fixed in the horizontal (landscape) orientation, two-handed operation is required, as on an ordinary game terminal.

When the touch screen is fixed in the vertical orientation in a smartphone game, however, the horizontal display range is inevitably narrow. In a battle game in a virtual space, for example, the player object moves quickly in the horizontal direction; if obstacles such as enemy objects or traps lie in that direction, they suddenly appear on the screen. This produces a surprise that is unfavorable for the user in the progress of the game. That is, compared with a display fixed in the horizontal orientation, the view displayed from the virtual camera risks causing the user considerable discomfort.

JP2013-81621A

An object of the present invention is to provide camera-work control of a virtual camera that improves usability for the user without impairing the sense of realism of the game. In particular, when the user operates the touch screen in a smartphone game, the restriction of the screen display area in a specific direction is to be overcome through effective camera-work control of the virtual camera.

In order to solve the above problems, the present invention provides a game program for causing a computer to control a virtual camera provided in a virtual space. That is, the game program causes the computer to function as: a moving operation unit that moves a player object placed in the virtual space based on user commands from an input unit; an image generation unit that generates a game image to be displayed on a display unit based on the view area from the virtual camera; a specific detection unit that identifies the position of an enemy object and detects the enemy object within a detection area associated with the visual field center point; and a control unit that optimizes the view area from the virtual camera. The control unit moves the visual field center point in accordance with the movement of the player object so that the position of the player object is included in the view area, thereby causing the virtual camera to track the player object. When the identified position of the enemy object is within the detection area but not within the view area, the control unit keeps the visual field center point at a fixed position and moves the virtual camera within a movable range consisting of a part of a sphere surface centered on the visual field center point, so that the positions of both the player object and the enemy object are included in the view area.

In the present invention, the control unit may further move the moved virtual camera in the direction away from the visual field center point in the virtual space. In addition, the movable range is a region determined by a polar angle range and an azimuth angle range of given angles with respect to the visual field center point on the horizontal plane; in particular, the azimuth angle range is ±10 degrees or less.

According to the present invention, an enemy object that is not displayed on the user's screen but exists nearby can be detected automatically, and camera work can be provided that displays an image of the enemy object together with the player object. The presence of such an enemy object can thus be presented on the user's screen at an early stage, and usability can be improved without giving the user a sense of discomfort in the progress of the game.

By controlling the rotation of the virtual camera on the sphere surface, the enemy object can be displayed close to the center line of the screen together with the player object, and more peripheral information about the enemy object can be displayed on the screen while the game is in progress. This removes the restriction of the screen display area in a specific direction, such as the horizontal direction on a smartphone screen fixed in the vertical orientation, and effectively prompts user actions (attacks and the like) against the enemy object.

In the present invention, the rotation angle of the virtual camera is limited to "10 degrees or less", as described above, in consideration of screen sickness (video sickness) on the part of the user. That is, by limiting the rotation of the virtual camera to 10 degrees or less, effective usability that does not cause screen sickness can be provided without impairing the user's sense of presence.

FIG. 1 is an example of a game image actually displayed according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of a game system according to an embodiment of the present invention.
FIG. 3 is a functional block diagram of a game server included in the game system according to the embodiment of the present invention.
FIG. 4 is a schematic three-dimensional view showing an arrangement example of the virtual camera and the player object in the virtual space realized using the game system of FIG. 2.
FIG. 5 is a schematic plan view illustrating an arrangement example of the virtual camera and the player object in the virtual space.
FIG. 6 is a schematic flowchart of the camera-work control processing of the virtual camera implemented on the game system of FIG. 2.
FIG. 7 is a schematic three-dimensional view showing a specific example of camera work in the virtual space for the automatic tracking of the player object shown in FIG. 6.
FIG. 8 is a flowchart detailing, among the processes shown in FIG. 6, the process of optimizing the view area of the virtual camera.
FIG. 9 is a schematic plan view showing an arrangement example of the virtual camera and each object in the virtual space realized using the game system of FIG. 2.
FIG. 10 is a schematic three-dimensional view showing a specific example of the movable range of the virtual camera in the virtual space for the optimization processing shown in FIG. 8.
FIG. 11 is a schematic plan view illustrating a specific example of the movable range of the virtual camera in the virtual space for the optimization processing shown in FIG. 8.
FIG. 12 is a schematic plan view showing rotation of the virtual camera in the azimuth direction, corresponding to FIGS. 9 and 11(2).
FIG. 13 is a schematic three-dimensional view showing a specific example of the movement path within the movable range of the virtual camera shown in FIG. 10.
FIG. 14 is a schematic side view showing a specific example of the camera work of the virtual camera in the virtual space for the optimization processing shown in FIG. 8.
FIG. 15 is an example of an actual game image in which the optimization processing shown in FIG. 8 is implemented.

  Hereinafter, a game program for causing a computer to control the operation of a virtual camera provided in a virtual space according to an embodiment of the present invention and a game system related thereto will be described with reference to the drawings.

Outline of Game Content

First, an outline of an example of game content in which the game program of the present invention is implemented is given with reference to FIG. 1. Here, as on the display screen 11 of FIG. 1, an action RPG is assumed as the game content. The player object 1 displayed at the center of the display screen 11 can, in principle, move through the game's virtual space along the horizontal plane according to user commands given through the terminal input unit. A virtual camera 3 (not shown) is disposed behind and above the player object 1, and a game image 6 is generated based on the field of view from the virtual camera 3 and displayed on the display screen 11.

In addition, a circular enemy object detection area 7 centered on the player object 1 may optionally be displayed as part of the game image 6 on the display screen 11 (described later). Further, on the upper right of the display screen 11, wide-area plane information 6a of the virtual space centered on the player object 1 is displayed as part of the game image 6. The wide-area plane information 6a shows a plan view from the vertical direction centered on the player object 1, together with the orientation and movable range of the player object 1, and can present the user with area information wider than the view area from the virtual camera 3.

In addition, on the upper left of the display screen 11, characteristic information 6b of the player object 1 in the action RPG, such as the character type, a character image, and strength (hit points (HP) and save points (SP)), is also displayed as part of the game image 6. The player object 1 shown in FIG. 1 carries a sword as a weapon; in the action RPG, the strength of the player object 1 depends on such weapons, so weapon information and the like are preferably included in the characteristic information.

In FIG. 1, no enemy object 2 (not shown here; described later with reference to FIG. 9 onward) is yet present in the surrounding area, so none is displayed in the game image 6 or in the wide-area plane information 6a. When the player object 1 is moved by user commands and/or the enemy object 2 is moved by given program commands so that the two objects come close to each other, the enemy object 2 first appears in the wide-area plane information 6a and then in the game image 6. The user can then engage the enemy object 2 by further moving the player object 1 to the position of the enemy object 2 by user operation.

The content to which the game program of the present invention applies is not limited to such an action RPG. The present invention can be applied to any game in which a plurality of objects can exist simultaneously in a virtual space, all or some of these objects can be identified on the display screen based on the field of view from the virtual camera 3, and the objects can be moved through the space by the user; besides an action RPG, this includes, for example, a shooting game.

Outline of Game System

The basic configuration of the game system 100 in the embodiment of the present invention will be described with reference to FIGS. 2 and 3, which are block diagrams functionally showing only the main components of the game system. As shown in FIG. 2, the game system 100 includes a user terminal 10 and a game server 50.

Referring to FIG. 2, the user terminal 10 includes a terminal display unit 11, a terminal input unit 12, a terminal processing unit 13, a terminal communication unit 14, a terminal storage unit 15, and the like, and can interact with the game server 50 via a network.

The terminal display unit 11 displays the game screen that is generated by the game server 50, received by the terminal communication unit 14, and temporarily stored in the terminal storage unit 15; it is realized by, for example, a liquid crystal display (LCD). The terminal input unit 12 is used to input commands for causing the player object 1 to perform various operations ("movement", "fight an enemy object", and so on) and is realized by, for example, a touch panel or operation buttons. Based on input from the terminal input unit 12, the terminal processing unit 13 creates a user command and communicates with the game server 50 through the terminal communication unit 14 and the network, causing the game server 50 to execute the program. The user terminal 10 is preferably a smartphone, in which case the terminal input unit 12 is preferably a touch panel.

On the game server 50 side, upon receiving a user command from the user terminal 10 via the communication unit 53, the processing device 52 executes the game program 510, which comprises various program modules stored in the storage unit 51, using the processing data 520, and returns the result to the user terminal 10.

In relation to FIG. 2, all or part of the functions of the information processing apparatus may instead be performed by the user terminal 10. In that case, the user terminal 10 alone constitutes the information processing apparatus, or the user terminal 10 and the game server 50 together constitute the information processing apparatus.

Further, exemplary functions of the game server 50 will be described with reference to FIGS. 2 and 3, starting with the various program modules 510. The player object movement operation module 511 implements the function of the player object movement operation unit 61, which moves the player object 1 through the virtual space in accordance with user commands. The game image generation module 512 implements the function of the game image generation unit 62, which generates the game image 6 based on the field of view from the virtual camera 3. The enemy object identification module 513 implements the function of the enemy object identification detection unit 63, which identifies the position of an enemy object in the virtual space and, in particular, detects whether the enemy object 2 has entered the enemy object detection area 7. The virtual camera control module 514 implements the function of the virtual camera control unit 64, which controls the camera-work operation of the virtual camera 3 and optimizes the view area (the camera-work processing is described later).

The processing data 520 will now be described. The virtual space data 521 mainly comprises data defining the virtual space; in the example of FIG. 1, the waterside area 6A, sandbox area 6B, rock wall area 6C, and grassland area 6D are defined as three-dimensional space data, and the areas along which the player object can move are defined. The player object data 522 includes the position coordinates and orientation of the player object 1 in the virtual space, the enemy object detection area 7 associated with that position information, including its shape and size, player characteristic information, and the like. The enemy object data 523 includes the number of enemy objects, their respective position coordinates and actions, enemy object characteristic information, and the like. The virtual camera data 524 includes the coordinate position, azimuth, and viewing angle of the virtual camera, and other data defining the view area from the virtual camera 3.

In-Game Three-Dimensional Virtual Space

The three-dimensional virtual space in which the player object 1 and the virtual camera 3 are arranged will be described with reference to FIGS. 4 and 5.

As shown in FIG. 4, the player object 1 and the virtual camera 3 are arranged in an XYZ three-dimensional space. The virtual camera 3 is configured to set a view area 8 and to photograph the player object 1 from behind and above. More specifically, the virtual camera control module 514 sets the view area 8 so that its center is the visual field center point 5, and aligns the visual field center point 5 so that it is always the center point of the game image 6. The visual field center point is the point where the central axis extending from the virtual camera intersects the horizontal plane.
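The visual field center point can be computed directly from the camera pose. The following is a minimal sketch (illustration only, not code from the patent; the function name and the use of Python/NumPy are assumptions) that intersects the camera's central axis with the horizontal plane z = 0:

```python
import numpy as np

def view_center_point(camera_pos: np.ndarray, camera_dir: np.ndarray) -> np.ndarray:
    """Point where the camera's central axis meets the horizontal plane z = 0.

    camera_pos: camera position (x, y, z) with z > 0 (above the plane).
    camera_dir: unit vector along the viewing axis, pointing downward
                (camera_dir[2] < 0), so the ray is guaranteed to hit the plane.
    """
    if camera_dir[2] >= 0:
        raise ValueError("viewing axis must point toward the horizontal plane")
    t = -camera_pos[2] / camera_dir[2]   # ray parameter at the z = 0 plane
    return camera_pos + t * camera_dir   # the visual field center point 5
```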

In the example of FIG. 4, the player object 1 is placed on the visual field center point 5; in this case the player object 1 is displayed at the center (the broken-line intersection) of the game image 6, as shown in FIG. 1. As the player object 1 moves through the virtual space, the view area 8 is adjusted by moving the visual field center point through control of the position and orientation of the virtual camera 3.

In FIG. 4, the enemy object detection area 7 is shown as a circular area on the XY plane centered on the visual field center point 5. As illustrated, the view area 8 of the virtual camera 3 and the enemy object detection area 7 are set independently in the virtual space, so there may be regions where they do not overlap. That is, there may be one or more blind areas 9 and 9′ that are included in the enemy object detection area 7 but not in the view area 8. When the enemy object 2 is present in blind area 9 or 9′, this means that the enemy object 2 is not shown in the game image 6 even though it is sufficiently close to the player object 1.
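In code, the blind-area condition is simply membership in the detection area combined with failure of the view-area test. A minimal sketch (hypothetical names; the frustum test is passed in as a callback because the patent does not specify its implementation):

```python
import numpy as np
from typing import Callable

def in_detection_area(p: np.ndarray, center: np.ndarray, radius: float) -> bool:
    """Inside the circular enemy object detection area 7 on the XY plane."""
    return np.hypot(p[0] - center[0], p[1] - center[1]) <= radius

def in_blind_area(enemy_xy: np.ndarray, center_xy: np.ndarray, radius: float,
                  in_view_area: Callable[[np.ndarray], bool]) -> bool:
    """Blind area 9, 9': inside detection area 7 but outside view area 8."""
    return in_detection_area(enemy_xy, center_xy, radius) and not in_view_area(enemy_xy)
```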

Similarly, FIG. 5 shows the XY plane of FIG. 4 viewed from above along the Z axis. The fixed viewing angle θf of the virtual camera facing the visual field center point 5 is basically determined by the aspect ratio of the display screen 11. As described above, when the user plays a smartphone game with the touch screen fixed in the vertical orientation, the viewing angle θf becomes very narrow, so a sufficient display area cannot be secured in the horizontal direction. That is, when a smartphone is used as the user terminal 10, the blind areas 9 and 9′ are very large in the horizontal direction of the screen. This leads to situations in which the enemy object 2 suddenly appears from the side in an unfavorable manner during the progress of the game, making the user uncomfortable. The display area of the virtual camera 3 therefore needs to be optimized to solve this problem.

In FIGS. 4 and 5, the enemy object detection area 7 is shown as a circular area on the XY plane by way of example; the present invention is not limited to this, and the area can have any shape and size. In particular, the detection area 7 is preferably determined based on characteristic information such as the strength of the player object 1 or the type of weapon it possesses. For example, when the enemy object detection area 7 is a circular area as described above, determining the radius of the circle based on the strength of the weapon possessed by the player object 1 increases the fun of the game as a competitive game: the stronger the weapon held, the better the user's desire to pit the player object 1 against more enemy objects 2 can be satisfied.
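As an illustration of this design choice (example only; the scaling rule and constant are assumptions, not taken from the patent), the detection radius could grow with the equipped weapon's strength:

```python
def detection_radius(base_radius: float, weapon_strength: float, k: float = 0.5) -> float:
    """Radius of the circular enemy object detection area 7, scaled so that
    a stronger equipped weapon yields a larger detection circle."""
    return base_radius * (1.0 + k * weapon_strength)
```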

Camera-Work Operation Control Process of the Virtual Camera

The camera-work operation control process of the virtual camera 3 using the game program of the present invention and the related game system will now be described in detail.

FIG. 6 is a flowchart outlining the camera-work control process of the virtual camera 3. First, in step S10, the player object movement operation module 511 moves the player object 1 arranged in the virtual space based on a user command from the terminal input unit 12. In step S20, in accordance with the movement of the player object 1, the virtual camera control module 514 moves the visual field center point 5 so that the position of the player object 1 is included in the view area 8, causing the virtual camera 3 to automatically track the player object 1.

Here, an aspect of the automatic tracking will be described with reference to FIG. 7, which shows an example of automatic tracking by the virtual camera 3 when the player object 1 moves horizontally through the virtual space from lower left to upper right in the figure, along the movement path 4 on the XY plane.

As described above, the visual field center point 5 is the point where the central axis extending from the virtual camera intersects the horizontal plane, and it is aligned so as always to be the center point of the game image 6. When the player object 1 starts moving, the virtual camera 3 places the player object 1 on the visual field center point 5; that is, the player object is captured at the center of the game screen. Next, while the player object 1 is moving along the movement path 4, the virtual camera 3 also moves the visual field center point along the movement path 4 while maintaining a constant height in the Z direction (dotted arrow in FIG. 7). At this time, the visual field center point 5′ may fall back a predetermined distance behind the position of the player object 1 along the movement path 4; that is, the player object 1 is positioned ahead of the visual field center point 5′ along the movement path 4. Finally, when the player object 1 finishes moving, it is placed again on the visual field center point 5″, as at the start of movement. In this way, the player object 1 is positioned on the visual field center point 5, 5″ when stationary and keeps a certain distance from the visual field center point 5′ while moving; in other words, the alignment is performed relative to the position of the visual field center point.
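The placement rule just described can be summarized in a few lines. A minimal sketch (hypothetical names; Python/NumPy assumed) that positions the visual field center point relative to the player object:

```python
import numpy as np

def tracked_view_center(player_xy: np.ndarray, move_dir_xy: np.ndarray,
                        moving: bool, lag: float) -> np.ndarray:
    """Visual field center point relative to the player object.

    Stationary: the center point coincides with the player (points 5, 5'').
    Moving: it trails the player by `lag` along the movement path (point 5'),
    so the player appears ahead of the screen center in the travel direction.
    """
    if not moving:
        return player_xy.copy()
    d = move_dir_xy / np.linalg.norm(move_dir_xy)  # unit direction of travel
    return player_xy - lag * d                     # drop back along the path
```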

Returning to FIG. 6, in step S30 the enemy object identification module 513 identifies the position of any nearby enemy object 2. When the enemy object 2 is located within the enemy object detection area 7 associated with the visual field center point 5, it is determined in step S40 whether the enemy object's position is outside the field of view 8 from the virtual camera 3 (that is, within the blind area 9 (9′) outside the view area 8).

If the enemy object 2 is determined to be located in the blind area 9, the process proceeds to step S50, where the virtual camera control module 514 keeps the visual field center point 5 of the view area 8 at a fixed position while adjusting the position of the virtual camera with respect to the visual field center point 5, thereby optimizing the view area 8 (described below). In step S60, the game image generation module 512 generates a game image based on the field of view 8 from the virtual camera 3 so that it can be displayed on the terminal display unit 11.
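Steps S10 to S60 can be read as one pass of a per-frame control loop. The following skeleton is a sketch only; the `game` object and its methods are assumed interfaces standing in for the modules 511-514, not an API defined by the patent:

```python
def camera_work_step(game) -> None:
    """One pass of the control flow of FIG. 6 (steps S10-S60)."""
    game.move_player_from_input()                  # S10: module 511 moves player object 1
    game.track_player_with_view_center()           # S20: module 514 tracks the player
    enemy = game.locate_nearby_enemy()             # S30: module 513 identifies enemy position
    if (enemy is not None
            and game.in_detection_area(enemy)
            and not game.in_view_area(enemy)):     # S40: enemy is in blind area 9
        game.optimize_view_area(enemy)             # S50: optimization of FIG. 8
    game.render_view_image()                       # S60: module 512 generates game image 6
```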

The optimization process of the view area 8 in step S50 will be described in more detail with reference to FIG. 8, which is a flowchart detailing the optimization process of step S50 performed by the virtual camera control module 514. In contrast to FIG. 9(1), in which the enemy object 2 is located in neither the enemy object detection area 7 nor the view area 8, FIG. 9(2) shows a state in which the enemy object 2 is within the enemy object detection area 7 but outside the view area 8 (that is, located in the blind area 9). When the enemy object 2 enters the blind area as in FIG. 9(2), camera-work control by the optimization process is performed.

Prior to the first step S51, the virtual camera data 524 at the start of the optimization process (in particular, the virtual camera initial position L1(x1, y1, z1)) may be stored in the storage unit 51. This is because, after the optimization process is completed and the user has the player object 1 perform a predetermined action (for example, a battle) on the enemy object 2, it is preferable for the progress of the game to use the stored virtual camera data 524 to return the virtual camera 3 to its original position.

In step S51, a movable range 70 consisting of a part of a sphere surface centered on the visual field center point 5 is first determined. The movable range 70 will be described with reference to FIGS. 10 and 11. As shown in FIG. 10, spherical coordinates are defined as polar coordinates in the three-dimensional space, with the visual field center point 5 on the horizontal plane as the coordinate origin (0, 0, 0), and the virtual camera initial position L1 is expressed as L1(r1, θ1, φ1). A movable range 70 is then defined as the range within which the virtual camera 3 can move. The position of the moving virtual camera 3 can thus be expressed in polar coordinates as L(r1, θc, φc) (in addition to its description in the XYZ coordinate system). Following convention, θc is referred to as the polar angle and φc as the azimuth angle, and the movable range 70 of the virtual camera 3 is configured as the shaded region in FIG. 10, having a polar angle range and an azimuth angle range.

FIG. 11(1) shows the XZ plane viewed from the Y-axis direction, and FIG. 11(2) shows the XY plane viewed from the Z-axis direction. As shown in FIG. 11, the movable range 70 of the virtual camera 3 has a preset polar angle θ and azimuth angle φ relative to the virtual camera initial position L1. That is, the polar angle range of the movable range 70 is determined as θ1 ≤ θc ≤ θ1 + θ, and the azimuth angle range as φ1 − φ/2 ≤ φc ≤ φ1 + φ/2.
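These spherical coordinates translate to Cartesian positions in the usual way, and confining the camera to the movable range 70 amounts to a pair of clamps on (θc, φc). A minimal sketch (hypothetical names; angles in radians; Python/NumPy assumed):

```python
import numpy as np

def spherical_to_cartesian(r: float, theta: float, phi: float) -> np.ndarray:
    """Polar angle theta measured from the +Z axis, azimuth phi in the XY
    plane, origin at the visual field center point 5."""
    return np.array([r * np.sin(theta) * np.cos(phi),
                     r * np.sin(theta) * np.sin(phi),
                     r * np.cos(theta)])

def clamp_to_movable_range(theta_c: float, phi_c: float,
                           theta1: float, phi1: float,
                           theta_span: float, phi_span: float) -> tuple:
    """Constrain a candidate camera angle to the movable range 70:
    theta1 <= theta_c <= theta1 + theta_span and
    phi1 - phi_span/2 <= phi_c <= phi1 + phi_span/2
    (per the text, the azimuth excursion is kept within about +-10 degrees)."""
    theta_c = min(max(theta_c, theta1), theta1 + theta_span)
    phi_c = min(max(phi_c, phi1 - phi_span / 2.0), phi1 + phi_span / 2.0)
    return theta_c, phi_c
```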

Because the movable range 70 is configured as part of a sphere surface, the distance between the visual field center point 5 and the virtual camera 3 is kept constant (r1), which preserves the user's sense of realism. Moreover, configuring the movable range 70 with a polar angle range and an azimuth angle range means that only the two polar coordinate parameters θ and φ are needed when designing and implementing the movement path of the virtual camera 3, which improves processing efficiency.

As described above with reference to FIG. 5, in view of the problem that the viewing angle θf is very narrow and a sufficient horizontal display area cannot be secured, it should be noted that the virtual camera 3 must be rotated not only in the polar angle direction but also in the azimuth direction.

Here, the inventor's experiments have confirmed that the rotation angle of the virtual camera in the azimuth direction is preferably 10 degrees or less. If the virtual camera were allowed to rotate without an angle limit as the player object 1 moves around the enemy object 2, the view area would rotate accordingly. This comes close to the experience of riding a teacup ride at an amusement park, making the user prone to screen sickness (video sickness), which is not appropriate. With the limit set at about 30 degrees, some users evaluated the result as "more realistic", but this angle also proved inappropriate for users prone to screen sickness. With a limit of 10 degrees or less, on the other hand, users experienced almost no screen sickness, so the rotation angle should be set to 10 degrees or less.

Note that in FIG. 10 and subsequent figures the player object 1 is assumed to be moving, so the position of the player object 1 is offset from the visual field center point 5, as described with reference to FIG. 7.

Returning to FIG. 8, in step S52 the destination position L2(r1, θ2, φ2) of the virtual camera 3 in the optimization process is determined. Regarding the movement of the virtual camera in the azimuth direction in particular, care must be taken that the distance from the enemy object 2 to the virtual camera 3 be shortened in order to include the enemy object 2 in the field of view 8 of the virtual camera. More specifically, as can be seen from FIG. 12 (which corresponds to FIG. 9), the virtual camera 3 must be moved in the direction (the counterclockwise direction indicated by the arrow) in which its distance d2 to the enemy object 2 becomes shorter than the distance d1 at the initial position L1 (d1 > d2).
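The rotation sense can be chosen numerically by trying a small azimuth step each way and keeping the one that shortens the camera-to-enemy distance. A sketch under assumed names (the 1-degree trial step is an arbitrary illustration value):

```python
import numpy as np

def azimuth_step_direction(camera_xy: np.ndarray, center_xy: np.ndarray,
                           enemy_xy: np.ndarray,
                           dphi: float = np.radians(1.0)) -> int:
    """Return +1 (counterclockwise) or -1 (clockwise): the azimuth rotation
    about the visual field center point 5 that shortens the camera-to-enemy
    distance, i.e. that makes d2 < d1 as required in step S52."""
    def rotated(sign: int) -> np.ndarray:
        v = camera_xy - center_xy
        c, s = np.cos(sign * dphi), np.sin(sign * dphi)
        return center_xy + np.array([c * v[0] - s * v[1],
                                     s * v[0] + c * v[1]])
    d_ccw = np.linalg.norm(rotated(+1) - enemy_xy)  # counterclockwise trial
    d_cw = np.linalg.norm(rotated(-1) - enemy_xy)   # clockwise trial
    return +1 if d_ccw < d_cw else -1
```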

Referring again to FIG. 8, once the destination position L2 is determined in step S52, the movement path 72 of the virtual camera, passing through the movable range 70 between the initial position L1 and the post-movement position L2, is determined in step S53. FIG. 13 shows an example of this movement path 72: with L2(r1, θ2, φ2) = (r1, θ1 + θ, φ1 − φ/2), the movement path 72 between L1 and L2 is indicated by the arrow. The movement path 72 should pass through the movable range 70 and connect the positions L1 and L2 by the shortest distance.

When the movement path 72 of the virtual camera 3 has been determined through the series of steps S51 to S53 in FIG. 8, the virtual camera 3 is actually moved along the movement path 72 in step S54 while its posture is adjusted so as to keep the visual field center point at a fixed position. If, as a result of this processing, the enemy object 2 is included in the field of view 8 of the virtual camera (step S55), the optimization process ends. If, on the other hand, the enemy object 2 is still located in the blind area 9 outside the field of view, the process of step S56 is further performed.

In step S56 of FIG. 8, as also shown in the side view of FIG. 14, the virtual camera 3, having moved from the initial position L1(r1, θ1, φ1) to the position L2(r1, θ2, φ2), is moved further in the direction away from the visual field center point. At this time it is preferable to keep the polar angle θ2 and the azimuth angle φ2 constant and move the camera linearly so as to increase its distance from the visual field center point 5; the final position of the virtual camera 3 is then L3(r2, θ2, φ2). By moving the virtual camera 3 a predetermined distance away from the visual field center point 5, the view area 8 of the virtual camera can be widened, so that the enemy object 2 in the blind area 9 may come to be included in the widened view area.
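Step S56 amounts to increasing the radial coordinate while θ2 and φ2 stay fixed. A sketch (hypothetical names; the frustum test is a caller-supplied callback, and the step size and maximum distance are illustration parameters not given in the patent):

```python
from typing import Callable

def pull_back_until_visible(r: float, theta: float, phi: float,
                            enemy_in_view: Callable[[float, float, float], bool],
                            step: float, r_max: float) -> float:
    """Step S56 of FIG. 8: retreat from the visual field center point along
    the viewing axis (theta, phi fixed) until the enemy object enters the
    widened view area or a maximum distance is reached; returns r2 for L3."""
    while not enemy_in_view(r, theta, phi) and r < r_max:
        r = min(r + step, r_max)
    return r
```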

Finally, FIG. 15 shows an example of the actual game image 6 when the series of optimization processes shown in FIG. 8 is performed. In FIG. 15(1), the wide-area plane information 6a suggests the existence of a plurality of enemy objects 2, and an enemy object 2 is in fact located within the enemy object detection area 7. However, since the enemy object 2 is in the blind area 9, the user cannot confirm its presence from the view area 8 (see also FIG. 9(2)).

Next, in FIGS. 15(2) and (3), as a result of the optimization process of FIG. 8, the enemy object 2 is included in the view area 8 and can be seen by the user on the game screen 6. Comparing FIGS. 15(2) and (3) in particular, the enemy object 2 approaches the broken line a at the center of the screen as the virtual camera position moves in the azimuth direction; this can be seen from the shift between the dotted line b and the solid line c.

As described above, by performing camera-work control of the virtual camera 3 so that an enemy object 2 located in the blind area 9 is included in the view area, it is possible to solve the problem that, when the touch screen of a smartphone game is fixed in the vertical orientation, the horizontal display area cannot be sufficiently secured, the blind areas become larger than necessary, and unexpected attacks from enemy objects in the blind areas are permitted. In particular, moving the virtual camera in the azimuth direction improves usability by giving the user a greater sense of presence and making it easier to take action against the enemy object 2.

In addition, throughout this specification the description has focused on the problem of horizontal display when the touch screen of a smartphone game is fixed in the vertical orientation (portrait holding), but the present invention is not limited to this case. Even when the screen is fixed in the horizontal orientation (landscape holding), display in the vertical direction becomes a problem in the same way, and it should be noted that the invention likewise has the effect of preventing unexpected attacks from enemy objects in the vertical direction.

The game program for causing a computer to control the operation of a virtual camera provided in a virtual space according to the embodiment of the present invention, and the related game system, have been described above. The above embodiment is merely an example for facilitating understanding of the present invention and does not limit it. The present invention can be changed and improved without departing from its gist and, needless to say, includes equivalents thereof.

DESCRIPTION OF SYMBOLS
1 Player object
2 Enemy object
3 Virtual camera
4 Movement path
5 Visual field center point
6 Game image
7 Enemy object detection area
8 View area
9 Blind area
10 User terminal
11 Display screen
50 Game server
51 Storage unit
61 Player object movement operation unit
62 Game image generation unit
63 Enemy object identification detection unit
64 Virtual camera control unit
70 Virtual camera movable range
71 Virtual camera movement path
510 Game program
520 Processing data

Claims (8)

1. A program for causing a computer to execute:
defining a virtual space including a virtual viewpoint, a first object, and a second object;
moving the first object based on a user operation;
defining a first area associated with the gazing point of the virtual viewpoint or with the first object;
a first control step of controlling at least one of the position and orientation of the virtual viewpoint so that the field of view from the virtual viewpoint captures the first object and the virtual viewpoint is located outside the first area;
a second control step of performing specific view control, which automatically controls at least one of the position and orientation of the virtual viewpoint so that both the first object and the second object are included in the field of view and the virtual viewpoint is located outside the first area, when the second object is included in the first area but not included in the field of view, and of not performing the specific view control when the second object is included in the field of view, even if the second object is included in the first area; and
displaying a view image corresponding to the field of view on a display unit.
  2.   The program according to claim 1, wherein the first area is a circular area.
3. The program according to claim 1, wherein, in the specific view control, the second control step automatically controls at least one of the position and orientation of the virtual viewpoint, without changing the distance from the gazing point of the virtual viewpoint to the virtual viewpoint, so that both the first object and the second object are included in the field of view and the virtual viewpoint is located outside the first area.
4. The program according to claim 3, wherein, in the second control step, when both the first object and the second object cannot be included in the field of view without changing the distance, the position of the virtual viewpoint is moved away from the gazing point so that both the first object and the second object are included in the field of view and the virtual viewpoint is located outside the first area.
5. The program according to any one of claims 1 to 4, wherein, in the specific view control, the second control step automatically controls at least one of the position and orientation of the virtual viewpoint, while maintaining the position of the gazing point of the virtual viewpoint, so that both the first object and the second object are included in the field of view and the virtual viewpoint is located outside the first area.
6. The program according to any one of claims 1 to 5, wherein the first control step further automatically controls at least one of the position and posture of the virtual viewpoint so that the first object is positioned on the gazing point of the virtual viewpoint.
7. Defining a virtual space including a virtual viewpoint, a first object, and a second object;
moving the first object based on a user operation;
defining a first area associated with the gazing point of the virtual viewpoint or with the first object;
a first control step of controlling at least one of the position and orientation of the virtual viewpoint so that the field of view from the virtual viewpoint captures the first object and the virtual viewpoint is located outside the first area;
a second control step of performing specific view control, which automatically controls at least one of the position and orientation of the virtual viewpoint so that both the first object and the second object are included in the field of view and the virtual viewpoint is located outside the first area, when the second object is included in the first area but not included in the field of view, and of not performing the specific view control when the second object is included in the field of view, even if the second object is included in the first area; and
displaying a view image corresponding to the field of view on a display unit.
8. An information processing apparatus comprising a processor, the processor:
defining a virtual space including a virtual viewpoint, a first object, and a second object;
moving the first object based on a user operation;
defining a first area associated with the gazing point of the virtual viewpoint or with the first object;
controlling at least one of the position and orientation of the virtual viewpoint so that the field of view from the virtual viewpoint captures the first object and the virtual viewpoint is located outside the first area;
performing specific view control, which automatically controls at least one of the position and orientation of the virtual viewpoint so that both the first object and the second object are included in the field of view and the virtual viewpoint is located outside the first area, when the second object is included in the first area but not included in the field of view, and not performing the specific view control when the second object is included in the field of view, even if the second object is included in the first area; and
displaying a view image corresponding to the field of view on a display unit.
JP2015105005A 2015-05-22 2015-05-22 Game program Active JP6567325B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2015105005A JP6567325B2 (en) 2015-05-22 2015-05-22 Game program


Related Parent Applications (1)

Application Number Title Priority Date Filing Date
JP2014105949 Division 2014-05-22

Publications (2)

Publication Number Publication Date
JP2015221211A JP2015221211A (en) 2015-12-10
JP6567325B2 (en) 2019-08-28

Family

ID=54784603

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2015105005A Active JP6567325B2 (en) 2015-05-22 2015-05-22 Game program

Country Status (1)

Country Link
JP (1) JP6567325B2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018202053A (en) * 2017-06-08 2018-12-27 株式会社カプコン Game program, game device and server device

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100276549B1 (en) * 1995-12-07 2000-12-15 이리마지리 쇼우이치로 Image generation apparatus, image generation method, game machine using the method
JP3145059B2 (en) * 1997-06-13 2001-03-12 株式会社ナムコ Information storage medium and image generation device
JPH11137842A (en) * 1997-09-04 1999-05-25 Sega Enterp Ltd Image processing device
JP2001269482A (en) * 2000-03-24 2001-10-02 Konami Computer Entertainment Japan Inc Game system, computer-readable recording medium in which program for game is stored and image displaying method
JP3833445B2 (en) * 2000-06-14 2006-10-11 株式会社バンダイナムコゲームス Game apparatus and information storage medium
JP3927821B2 (en) * 2002-01-25 2007-06-13 株式会社バンダイナムコゲームス Information storage medium and a game device
JP4180065B2 (en) * 2005-05-13 2008-11-12 株式会社スクウェア・エニックス Image generation method, image generation apparatus, and image generation program
JP3954629B2 (en) * 2006-01-20 2007-08-08 株式会社コナミデジタルエンタテインメント Game device, a game device control method, and program
JP4459993B2 (en) * 2007-10-03 2010-04-28 株式会社スクウェア・エニックス Image generating apparatus, image generating program, and image generating program recording medium
JP5250237B2 (en) * 2007-10-23 2013-07-31 株式会社カプコン Program and game system
JP5507893B2 (en) * 2009-05-29 2014-05-28 株式会社バンダイナムコゲームス Program, information storage medium, and image generation system
JP5236674B2 (en) * 2010-03-04 2013-07-17 株式会社コナミデジタルエンタテインメント Game device, game device control method, and program
JP6011280B2 (en) * 2012-11-29 2016-10-19 株式会社ノーリツ Hot water storage water heater

Also Published As

Publication number Publication date
JP2015221211A (en) 2015-12-10


Legal Events

Date        Code  Title (Description)
2017-05-22  A621  Written request for application examination (JAPANESE INTERMEDIATE CODE: A621)
2018-05-21  A131  Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131)
2018-07-20  A521  Written amendment (JAPANESE INTERMEDIATE CODE: A523)
2018-08-13  A131  Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131)
2018-10-11  A601  Written request for extension of time (JAPANESE INTERMEDIATE CODE: A601)
2018-11-20  RD03  Notification of appointment of power of attorney (JAPANESE INTERMEDIATE CODE: A7423)
2018-11-20  A521  Written amendment (JAPANESE INTERMEDIATE CODE: A523)
2018-12-25  A02   Decision of refusal (JAPANESE INTERMEDIATE CODE: A02)
2019-03-25  A521  Written amendment (JAPANESE INTERMEDIATE CODE: A523)
2019-04-02  A911  Transfer of reconsideration by examiner before appeal (zenchi) (JAPANESE INTERMEDIATE CODE: A911)
(no date)   TRDD  Decision of grant or rejection written
2019-07-02  A01   Written decision to grant a patent or to grant a registration (utility model) (JAPANESE INTERMEDIATE CODE: A01)
2019-07-31  A61   First payment of annual fees (during grant procedure) (JAPANESE INTERMEDIATE CODE: A61)
(no date)   R150  Certificate of patent or registration of utility model (Ref document number: 6567325; Country of ref document: JP; JAPANESE INTERMEDIATE CODE: R150)