CN117695648A - Virtual character movement and visual angle control method, device, electronic equipment and medium


Info

Publication number: CN117695648A
Application number: CN202311727255.3A
Authority: CN (China)
Prior art keywords: trigger point, area, character, virtual, region
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 刘奇屹, 张秉炜, 曾江南, 叶琳骥, 谢裕香
Current assignee: Ai Avatar Technology Beijing Co ltd (the listed assignees may be inaccurate)
Original assignee: Ai Avatar Technology Beijing Co ltd
Application filed by Ai Avatar Technology Beijing Co ltd
Priority: CN202311727255.3A


Abstract

The present disclosure provides a method, an apparatus, an electronic device, and a medium for controlling the movement and viewing angle of a virtual character. The method includes: acquiring trigger operation data generated by a user on a display screen of an interactive device, the trigger operation data including a current trigger point; detecting the region affiliation of the current trigger point with respect to a character movement region and a first viewing-angle rotation region; and, if the detected region affiliation is a preset affiliation, controlling the viewing-angle rotation of the virtual character and controlling the display pose of the virtual character. By fusing the viewing-angle rotation region with the character movement region, the user can control character movement and viewing-angle rotation simultaneously with a single finger that never leaves the screen, and can switch hands to continue the experience during long holding sessions, so that the user immersively experiences the virtual character's viewpoint and the user experience is effectively improved.

Description

Virtual character movement and visual angle control method, device, electronic equipment and medium
Technical Field
Embodiments of the present disclosure relate to the technical field of virtual reality, and in particular to a method, an apparatus, an electronic device, and a medium for controlling the movement and viewing angle of a virtual character.
Background
The metaverse is a virtual world that maps to and interacts with the real world, a digital living space with a novel social system. In essence, it is the virtualization and digitization of the real world, which requires substantial advances in content production, economic systems, user experience, and physical-world content.
In the related art, a smart device can help the user experience the metaverse. Such a device adopts a landscape-oriented touch screen: the user must hold it with both hands, controlling two independent virtual joysticks with the left and right thumbs, and thereby steers the movement and viewing-angle rotation of the virtual character with both hands.
However, in this manner the user has to hold the device with two hands, and prolonged two-handed operation fatigues the user's arms and degrades the user experience.
Disclosure of Invention
Embodiments described herein provide a virtual character movement and viewing angle control method, apparatus, electronic device, and medium, which overcome the above-described problems.
According to a first aspect of the present disclosure, a method for controlling the movement and viewing angle of a virtual character is provided. The method is applied to an interactive device, wherein a first viewing-angle rotation region and a character movement region are provided on a display screen of the interactive device, the first viewing-angle rotation region describes an automatic viewing-angle rotation region, the first viewing-angle rotation region overlaps a partial area of the character movement region, and the virtual character is displayed on the display screen of the interactive device in portrait (vertical-screen) mode;
The method comprises the following steps:
acquiring trigger operation data generated by a user on the display screen of the interactive device, wherein the trigger operation data includes: a current trigger point;
detecting a region affiliation of the current trigger point with respect to the character movement region and the first viewing-angle rotation region, wherein the region affiliation describes the display region in which the current trigger point is located, the display region including: the character movement region and/or the first viewing-angle rotation region;
if it is detected that the region affiliation of the current trigger point with respect to the character movement region and the first viewing-angle rotation region is a preset affiliation, controlling the viewing-angle rotation of the virtual character and controlling the display pose of the virtual character, wherein the display pose includes: position coordinates and orientation data;
wherein the preset affiliation describes that the current trigger point is located in an overlapping area of the character movement region and the first viewing-angle rotation region.
According to a second aspect of the present disclosure, an apparatus for controlling the movement and viewing angle of a virtual character is provided. The apparatus is applied to an interactive device, wherein a first viewing-angle rotation region and a character movement region are provided on a display screen of the interactive device, the first viewing-angle rotation region describes an automatic viewing-angle rotation region, the first viewing-angle rotation region overlaps a partial area of the character movement region, and the virtual character is displayed on the display screen of the interactive device in portrait mode;
The apparatus comprises:
an acquisition module, configured to acquire trigger operation data generated by the user on the display screen of the interactive device, wherein the trigger operation data includes: a current trigger point;
a detection module, configured to detect a region affiliation of the current trigger point with respect to the character movement region and the first viewing-angle rotation region, wherein the region affiliation describes the display region in which the current trigger point is located, the display region including: the character movement region and/or the first viewing-angle rotation region;
a control module, configured to, if it is detected that the region affiliation of the current trigger point with respect to the character movement region and the first viewing-angle rotation region is a preset affiliation, control the viewing-angle rotation of the virtual character and the display pose of the virtual character, wherein the display pose includes: position coordinates and orientation data;
wherein the preset affiliation describes that the current trigger point is located in an overlapping area of the character movement region and the first viewing-angle rotation region.
According to a third aspect, an electronic device is provided, comprising a memory and a processor; a computer program is stored in the memory, and the processor, when executing the computer program, implements the steps of the virtual character movement and viewing-angle control method of any of the above embodiments.
According to a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored; the computer program, when executed by a processor, implements the steps of the virtual character movement and viewing-angle control method of any of the above embodiments.
The virtual character movement and viewing-angle control method provided by the present disclosure is applied to an interactive device on whose display screen a first viewing-angle rotation region and a character movement region are provided; the first viewing-angle rotation region describes an automatic viewing-angle rotation region and overlaps a partial area of the character movement region, and the virtual character is displayed on the screen in portrait mode. The method acquires trigger operation data generated by the user on the display screen, the data including a current trigger point; detects the region affiliation of the current trigger point with respect to the character movement region and the first viewing-angle rotation region, where the region affiliation describes the display region in which the current trigger point is located (the character movement region and/or the first viewing-angle rotation region); and, if the detected region affiliation is the preset affiliation, i.e., the current trigger point is located in an overlapping area of the character movement region and the first viewing-angle rotation region, controls the viewing-angle rotation of the virtual character and the display pose of the virtual character, the display pose including position coordinates and orientation data. The method is thus applied to an interactive device that the user can hold in one hand; the portrait display of the virtual character reduces holding difficulty, and the fusion of the character movement region with the viewing-angle rotation region lets a single finger control character movement and viewing-angle rotation simultaneously without leaving the screen. The user can switch hands to continue the experience during long holding sessions and immersively experience the virtual character's viewpoint, effectively improving the user experience.
The foregoing is merely an overview of the technical solutions of the embodiments of the present application. To make the technical means of the embodiments clearer, and to make the above and other objects, features, and advantages of the embodiments more comprehensible, the detailed description of the present application is set forth below.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings of the embodiments are briefly described below. It should be understood that the drawings described below relate only to some embodiments of the present disclosure and are not limitations of the present disclosure, in which:
fig. 1 is a flow chart illustrating a method for movement and view control of a virtual character according to the present disclosure.
Fig. 2A is a disassembled view of a hardware system of an interactive device provided by the present disclosure.
Fig. 2B is an interaction schematic of a virtual character provided by the present disclosure.
Fig. 2C is a schematic diagram of a virtual space provided by the present disclosure.
Fig. 2D is a schematic view of perspective rotation and character movement control provided by the present disclosure.
Fig. 2E is a schematic diagram of movement of a virtual character provided by the present disclosure.
Fig. 2F is a schematic diagram of a manual perspective rotation operation provided by the present disclosure.
Fig. 3 is a schematic structural view of a virtual character movement and viewing angle control apparatus provided in the present disclosure.
Fig. 4 is a schematic structural diagram of an electronic device provided in the present disclosure.
It is noted that the elements in the drawings are schematic and are not drawn to scale.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings. It will be apparent that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments, which can be made by those skilled in the art based on the described embodiments of the present disclosure without the need for creative efforts, are also within the scope of the protection of the present disclosure.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the presently disclosed subject matter belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the specification and relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. As used herein, a statement that two or more parts are "connected" or "coupled" together shall mean that the parts are joined together either directly or joined through one or more intermediate parts.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of the phrase "an embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: there are three cases, a, B, a and B simultaneously. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship. Terms such as "first" and "second" are used merely to distinguish one component (or portion of a component) from another component (or another portion of a component).
In the description of the present application, unless otherwise indicated, the meaning of "plurality" means two or more (including two), and similarly, "plural sets" means two or more (including two).
In order to better understand the technical solutions of the present application, the following description will clearly and completely describe the technical solutions in the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for controlling the movement and viewing angle of a virtual character according to an embodiment of the present disclosure. The method is applied to an interactive device, which may be a smart device with touch-screen functionality, such as a touch-screen mobile phone or a touch-screen computer.
As shown in Fig. 2A, a three-dimensional exploded view of the hardware system of the interactive device, the device includes: a transparent touch sensor 21, a display screen 22, a processor 23, and a battery 24. The transparent touch sensor 21 and the display screen 22 have the same size and are bonded together by a lamination process; the sensor, screen, processor, and battery are assembled and fastened in a housing, so that the device can be held in one hand.
The transparent touch sensor 21 detects whether and where the user's finger touches the screen and transmits the touch signal to the processor 23; the processor 23 computes the virtual character image and sends a display signal to the display screen 22 for display.
The battery 24 supplies power to the processor 23, the display screen 22, and the transparent touch sensor 21 through their respective power supply lines.
As shown in Fig. 2B, an interaction schematic of the virtual character, the display screen of the interactive device is provided with a first viewing-angle rotation region O1 and a character movement region O2. The first viewing-angle rotation region describes the automatic viewing-angle rotation region and overlaps a partial area of the character movement region, and the virtual character is displayed on the screen of the interactive device in portrait (vertical-screen) mode.
On the touch screen, a manual viewing-angle rotation region and a character movement region are set as the interaction regions for virtual camera rotation and character movement control, respectively. The character movement region and the automatic viewing-angle rotation region overlap, which means that both functions are triggered simultaneously in the overlapping portion.
As shown in fig. 1, the specific process of the virtual character movement and viewing angle control method includes:
s110, acquiring triggering operation data generated on a display screen of the interactive equipment by a user, wherein the triggering operation data comprises: the current trigger point.
The current trigger point is a touch point of the user on the display screen of the interactive device at the current moment, and the current trigger point can be located at any position point on the display screen of the interactive device.
S120: detecting the region affiliation of the current trigger point with respect to the character movement region and the first viewing-angle rotation region.
The character movement region and the first viewing-angle rotation region are touchable sliding areas on the display screen; the user controls the virtual character through touch operations in the corresponding region.
The region affiliation describes the display region in which the current trigger point is located, and the display region includes: the character movement region and/or the first viewing-angle rotation region. In other words, the region affiliation may be that the current trigger point is located in the character movement region or the first viewing-angle rotation region alone, or that it is located in both, i.e., in an overlapping area of the character movement region and the first viewing-angle rotation region.
The preset affiliation describes the case in which the current trigger point is located in an overlapping area of the character movement region and the first viewing-angle rotation region.
In some embodiments, the character movement region and the first viewing-angle rotation region have two overlapping areas at independent positions; the preset affiliation then includes: the current trigger point is located in one overlapping area of the character movement region and the first viewing-angle rotation region; or the current trigger point is located in the other overlapping area.
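As an illustration only, the affiliation detection of S120 can be reduced to point-in-rectangle tests. The following Python sketch is a minimal, assumption-laden example; the rectangle layout, the coordinate values, and all names (Rect, region_affiliation, and so on) are illustrative and do not come from the patent:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rect:
        left: float
        top: float
        right: float
        bottom: float

        def contains(self, x: float, y: float) -> bool:
            return self.left <= x <= self.right and self.top <= y <= self.bottom

    # Assumed layout: the character movement region O2 spans the lower part
    # of a portrait screen; the automatic rotation region O1 consists of a
    # left strip and a right strip, each overlapping part of O2.
    O2_MOVE = Rect(0, 600, 1080, 1900)
    O1_LEFT = Rect(0, 600, 200, 1900)      # one overlapping area (turn left)
    O1_RIGHT = Rect(880, 600, 1080, 1900)  # the other overlapping area (turn right)

    def region_affiliation(x: float, y: float) -> set:
        """Return the display regions containing the current trigger point."""
        regions = set()
        if O2_MOVE.contains(x, y):
            regions.add("move")
        if O1_LEFT.contains(x, y) or O1_RIGHT.contains(x, y):
            regions.add("auto_rotate")
        return regions

    def is_preset_affiliation(x: float, y: float) -> bool:
        """True when the trigger point lies in an overlapping area (see S130)."""
        return {"move", "auto_rotate"} <= region_affiliation(x, y)

With this assumed layout, a trigger point inside either strip satisfies the preset affiliation, since both strips lie entirely within O2.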
S130: if it is detected that the region affiliation of the current trigger point with respect to the character movement region and the first viewing-angle rotation region is the preset affiliation, controlling the viewing-angle rotation of the virtual character and the display pose of the virtual character.
The display pose includes: position coordinates and orientation data. The orientation data is the orientation/angle of the virtual character in the metaverse, and the position coordinates are the location point of the virtual character in the metaverse.
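For concreteness, such a display pose can be represented as a plain record; this is a minimal sketch whose field names are assumptions, not taken from the patent:

    from dataclasses import dataclass

    @dataclass
    class DisplayPose:
        # Position coordinates of the virtual character in the metaverse scene.
        x: float
        y: float
        z: float
        # Orientation data: the character's heading angle, in degrees.
        heading_deg: float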
In some embodiments, controlling the viewing-angle rotation of the virtual character and controlling its display pose includes:
controlling the viewing-angle rotation of the virtual character based on the current trigger point; acquiring the trigger path between the current trigger point and the trigger point preceding it; and controlling the display pose of the virtual character based on the current trigger point, the preceding trigger point, and the trigger path.
The trigger point preceding the current trigger point is the touch point operated by the user at the moment before the current moment. The trigger path between the two is the sliding path along which the user's finger slides on the display screen from the trigger point at the previous moment to the trigger point at the current moment.
Fig. 2C shows a schematic diagram of the virtual space of the metaverse, which includes: a virtual character (1), a virtual camera (2), a virtual camera target point (3), and a virtual character movement direction (4).
The virtual character (1) is a set of virtual data including the character's shape and actions, which can be rendered by the processor 23 and displayed as a character image on the display screen 22. The virtual camera (2) is a set of data recording camera parameters, position, and rotation; the virtual character (1) is rendered according to the parameters, position, and rotation of the virtual camera (2). The virtual camera (2) rotates around the virtual camera target point (3): Yaw in the figure is the horizontal rotation direction and Pitch the vertical rotation direction. The line connecting the virtual camera (2) and the target point (3) acts as the camera's "rocker arm"; every viewing-angle rotation operation in this embodiment is an operation of this rocker arm around the target point (3), with the virtual camera (2) attached to and moving with the arm. The movement direction of the virtual character (1) lies in the x-z plane of the figure, i.e., the forward/backward and left/right directions relative to the virtual camera (2).
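The rocker-arm rotation around the target point is ordinary orbit-camera geometry. A sketch follows, under the assumptions of a fixed arm length and a y-up coordinate system; the function and parameter names are illustrative:

    import math

    def orbit_camera_position(target, arm_length, yaw_deg, pitch_deg):
        """Place the virtual camera on a rocker arm of fixed length around
        the target point, given the horizontal (Yaw) and vertical (Pitch)
        rotation angles. target is an (x, y, z) tuple, y-up."""
        yaw = math.radians(yaw_deg)
        pitch = math.radians(pitch_deg)
        tx, ty, tz = target
        # Spherical-to-Cartesian conversion around the target point.
        cx = tx + arm_length * math.cos(pitch) * math.sin(yaw)
        cy = ty + arm_length * math.sin(pitch)
        cz = tz + arm_length * math.cos(pitch) * math.cos(yaw)
        return (cx, cy, cz)

    # Example: a 5-unit arm, rotated 45 degrees in Yaw and 30 degrees in Pitch.
    print(orbit_camera_position((0.0, 0.0, 0.0), 5.0, 45.0, 30.0))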
Fig. 2D shows the schematic diagram of viewing-angle rotation and character movement control. The user's finger touches the screen in the character movement region O2 and slides from touch point a1 to touch point b1, where the current trigger point b1 is located in the automatic viewing-angle rotation region O1. When b1 lies in the left rotation area, the virtual camera turns left; when b1 lies in the right rotation area, the virtual camera turns right, and the absolute value of the turning speed is positively correlated with the absolute value of x. Because b1 is also located in the character movement region O2, the movement logic is executed at the same time and the screen re-renders the virtual character in real time, which appears as the character moving while the viewing angle rotates.
In some embodiments, the first viewing-angle rotation region includes: a rotation area corresponding to a preset direction and a rotation area corresponding to a non-preset direction, the two directions being opposite to each other. For example, the preset direction is the left direction in Fig. 2D, and the non-preset direction is the right direction.
Controlling the viewing-angle rotation of the virtual character based on the current trigger point includes:
if the current trigger point is located in the rotation area corresponding to the preset direction, controlling the virtual camera to rotate in the preset direction; and if the current trigger point is located in the rotation area corresponding to the non-preset direction, controlling the virtual camera to rotate in the non-preset direction. The rotation of the virtual character's viewing angle is thereby effectively controlled.
The virtual camera acquires the image data of the virtual character and renders it to the display screen for display. The rotation area corresponding to the preset direction is one overlapping area of the first viewing-angle rotation region and the character movement region, and the rotation area corresponding to the non-preset direction is the other overlapping area.
The absolute value of the movement speed of the virtual camera is positively correlated with the absolute value of the horizontal displacement between the current trigger point and the trigger point immediately preceding it.
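Putting the region test and the speed rule together, one automatic-rotation step might look like the sketch below, which reuses the Rect regions O1_LEFT and O1_RIGHT from the earlier sketch; the gain constant and the sign convention are assumptions:

    def auto_rotation_step(x, y, prev_x, yaw_deg, gain=0.3):
        """Update the camera Yaw while the trigger point sits in an
        overlapping area; speed grows with the horizontal displacement
        from the previous trigger point."""
        dx = abs(x - prev_x)  # horizontal displacement since the last trigger point
        if O1_LEFT.contains(x, y):
            yaw_deg -= gain * dx   # preset direction: rotate left
        elif O1_RIGHT.contains(x, y):
            yaw_deg += gain * dx   # non-preset direction: rotate right
        return yaw_deg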
In this embodiment, a first viewing-angle rotation region and a character movement region are provided on the display screen of the interactive device; the first viewing-angle rotation region describes the automatic viewing-angle rotation region and overlaps a partial area of the character movement region, and the virtual character is displayed on the screen in portrait mode. Trigger operation data generated by the user on the display screen is acquired, including the current trigger point; the region affiliation of the current trigger point with respect to the character movement region and the first viewing-angle rotation region is detected, where the region affiliation describes the display region in which the current trigger point is located (the character movement region and/or the first viewing-angle rotation region); and if the detected region affiliation is the preset affiliation, i.e., the current trigger point is located in an overlapping area of the character movement region and the first viewing-angle rotation region, the viewing-angle rotation of the virtual character is controlled and the display pose of the virtual character (position coordinates and orientation data) is controlled. The method therefore applies to an interactive device that the user can hold in one hand, displays the virtual character in portrait mode to reduce holding difficulty, and fuses the character movement region with the viewing-angle rotation region, so that a single finger controls character movement and viewing-angle rotation simultaneously without leaving the screen. The user can switch hands to continue the experience during long holding sessions and immersively experience the virtual character's viewpoint, effectively improving the user experience.
In some embodiments, the method further includes: if the current trigger point is located in the character movement region, controlling the display pose of the virtual character based on the current trigger point, the trigger point preceding it, and the trigger path.
Fig. 2E shows the movement of the virtual character. When the user's finger touches the screen in the character movement region O2 and slides from touch point a2 to touch point b2, the virtual character turns toward and moves in the direction of the vector D relative to the virtual camera; the movement speed of the virtual character is positively correlated with the length of D, and the screen refreshes and renders the virtual character in real time.
In some embodiments, controlling the display pose of the virtual character based on the current trigger point, the preceding trigger point, and the trigger path includes:
determining a virtual movement path of the virtual character based on the current trigger point, the preceding trigger point, and the trigger path; adjusting the orientation data of the virtual character based on the orientation information between the virtual starting point and the virtual target point; and controlling the virtual character to move along the virtual movement path from the virtual starting point to the virtual target point.
The virtual starting point of the virtual movement path corresponds to the trigger point preceding the current trigger point, and the virtual target point corresponds to the current trigger point.
The orientation information between the virtual starting point and the virtual target point is the direction from the former to the latter. The virtual movement path may include a plurality of virtual movement points, each corresponding to one trigger point in the trigger path.
The virtual movement path may be determined, for example, from the mapping relationship of the three-dimensional model corresponding to the virtual character, where the mapping relationship describes the correspondence between a trigger point and a virtual movement point.
In this way, the orientation data of the virtual character is adjusted via the orientation information between the virtual starting point and the virtual target point, and the virtual character is controlled to move along the virtual movement path from the starting point to the target point, realizing spatial movement control of the virtual character.
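As a sketch of this movement step, the on-screen drag vector D can be interpreted relative to the camera as follows; the DisplayPose record from the earlier sketch is reused, and the speed gain, the frame time, and the screen-axis convention (y grows downward) are assumptions:

    import math

    def move_character(pose, drag_dx, drag_dy, camera_yaw_deg,
                       gain=0.01, dt=1.0 / 60.0):
        """Turn the character toward the drag vector D, taken relative to
        the virtual camera, and advance it; speed is proportional to |D|."""
        length = math.hypot(drag_dx, drag_dy)
        if length == 0.0:
            return pose
        # Dragging straight up on screen (negative dy) means moving forward.
        drag_angle = math.degrees(math.atan2(drag_dx, -drag_dy))
        # Orientation data: face the movement direction in the camera frame.
        pose.heading_deg = camera_yaw_deg + drag_angle
        # Position coordinates: step along the heading on the x-z plane.
        step = gain * length * dt
        rad = math.radians(pose.heading_deg)
        pose.x += step * math.sin(rad)
        pose.z += step * math.cos(rad)
        return pose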
In some embodiments, a second viewing-angle rotation region is also provided on the display screen of the interactive device. The second viewing-angle rotation region describes the manual viewing-angle rotation region and is independent of both the first viewing-angle rotation region and the character movement region.
Fig. 2F schematically shows the manual viewing-angle rotation operation, in which the second viewing-angle rotation region O3 is located in the area above the first viewing-angle rotation region and the character movement region; the second viewing-angle rotation region O3 is positionally independent of the first viewing-angle rotation region and of the character movement region.
The method of this embodiment may further include:
if the current trigger point is located in the second viewing-angle rotation region, acquiring a first rotation angle value and a second rotation angle value of the virtual camera; updating the first rotation angle value of the virtual camera based on the current trigger point and the trigger point preceding it; and updating the second rotation angle value of the virtual camera based on the current trigger point and the trigger point preceding it.
The first and second rotation angle values are rotation angle values in two mutually perpendicular directions: the first rotation angle value is the horizontal rotation angle value, and the second rotation angle value is the vertical rotation angle value.
The first rotation angle value is updated, for example, based on the horizontal displacement between the preceding trigger point and the current trigger point; the second rotation angle value is updated, for example, based on the vertical displacement between the preceding trigger point and the current trigger point.
Referring to Fig. 2F, when the user's finger touches the screen in the manual viewing-angle rotation region O3 and slides from touch point a3 to touch point b3 at the current trigger point, the horizontal rotation angle Yaw of the virtual camera's viewing angle (i.e., the first rotation angle value) increases by x relative to its value at the previous touch moment, and the vertical rotation angle Pitch (i.e., the second rotation angle value) increases by y relative to its value at the previous touch moment; x, y, Yaw, and Pitch may all be negative, and the screen refreshes and renders the virtual character in real time.
Thus, when the user performs a manual rotation operation in the manual viewing-angle rotation region, the rotation angles of the virtual camera in the two directions are updated based on the user's manual operation data, so that the display viewing angle of the virtual character can be rendered and displayed in real time.
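The manual update amounts to adding the drag displacement to Yaw and Pitch. A sketch follows; the sensitivity constant is an assumption, and so is the pitch clamp (the patent only states that the values may be negative):

    def manual_rotation_step(yaw_deg, pitch_deg, x, y, prev_x, prev_y,
                             sensitivity=0.2):
        """Update the first (Yaw) and second (Pitch) rotation angle values
        from the displacement between the previous and current trigger points."""
        yaw_deg += sensitivity * (x - prev_x)    # horizontal displacement -> Yaw
        pitch_deg += sensitivity * (y - prev_y)  # vertical displacement -> Pitch
        # Assumed safeguard: keep the camera from flipping over the pole.
        pitch_deg = max(-89.0, min(89.0, pitch_deg))
        return yaw_deg, pitch_deg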
In summary, this embodiment uses a portrait-oriented touch screen (aspect ratio < 1) to realize an interaction mode in which viewing-angle rotation and virtual character movement are controlled simultaneously under one-handed holding and single-finger operation. By partially overlapping the character movement region and the automatic viewing-angle rotation region, a single finger can control viewing-angle rotation and character movement at the same time without leaving the screen. The manual viewing-angle rotation region is placed outside the character movement region and the automatic viewing-angle rotation region, providing independent viewing-angle control. Because the character movement region and the automatic viewing-angle rotation region overlap only partially, the character can also move under a fixed viewing angle, which reduces the frequency of viewing-angle rotation and the resulting discomfort. With a single finger that never leaves the screen simultaneously controlling the viewing angle and the virtual character, the user can look at any position and direction of the virtual space.
Compared with joystick or key input, a touch screen reduces the device volume, keeps a high screen-to-body ratio, and maximizes the display area within a limited volume. Compared with a landscape-oriented touch screen (aspect ratio > 1), the portrait orientation (aspect ratio < 1) and one-handed interaction eliminate the discomfort of lifting both hands for a long time; fatigue is far lower than with a two-handed grip because the holding hand can be swapped. Compared with a design in which the camera direction is fixed, the viewing angle here can be changed as the user needs, letting the user immersively experience the virtual scene from any position and angle from the virtual character's viewpoint. Compared with a design having only an independent manual viewing-angle rotation region, this embodiment has both an independent manual rotation region and an automatic rotation region fused with the character movement region, so movement and rotation can be controlled simultaneously without the finger leaving the screen, and the user easily attains a high degree of control freedom.
In addition, this embodiment can configure intelligent viewing-angle rotation in selected areas of the virtual scene to help the user pick an angle of interest, and can identify the position of the user's eyes in physical space through the front-facing camera and rotate the viewing angle according to the eye position.
Fig. 3 is a schematic structural diagram of the virtual character movement and viewing-angle control apparatus provided in this embodiment, applied to an interactive device. The display screen of the interactive device is provided with a first viewing-angle rotation region and a character movement region; the first viewing-angle rotation region describes the automatic viewing-angle rotation region and overlaps a partial area of the character movement region, and the virtual character is displayed on the screen in portrait mode.
The movement and viewing-angle control apparatus of the virtual character may include: an acquisition module 310, a detection module 320, and a control module 330.
The acquisition module 310 is configured to acquire trigger operation data generated by the user on the display screen of the interactive device, the trigger operation data including: the current trigger point.
The detection module 320 is configured to detect the region affiliation of the current trigger point with respect to the character movement region and the first viewing-angle rotation region, where the region affiliation describes the display region in which the current trigger point is located, the display region including: the character movement region and/or the first viewing-angle rotation region.
The control module 330 is configured to, if it is detected that the region affiliation of the current trigger point with respect to the character movement region and the first viewing-angle rotation region is the preset affiliation, control the viewing-angle rotation of the virtual character and the display pose of the virtual character, where the display pose includes: position coordinates and orientation data.
The preset affiliation describes the case in which the current trigger point is located in an overlapping area of the character movement region and the first viewing-angle rotation region.
In this embodiment, optionally, the character movement region and the first viewing-angle rotation region have two overlapping areas at independent positions; the preset affiliation includes: the current trigger point is located in one overlapping area of the character movement region and the first viewing-angle rotation region; or the current trigger point is located in the other overlapping area.
In this embodiment, optionally, the control module 330 includes: a first control unit, an acquisition unit, and a second control unit.
The first control unit is configured to control the viewing-angle rotation of the virtual character based on the current trigger point.
The acquisition unit is configured to acquire the trigger path between the current trigger point and the trigger point preceding it.
The second control unit is configured to control the display pose of the virtual character based on the current trigger point, the preceding trigger point, and the trigger path.
In this embodiment, optionally, the first viewing-angle rotation region includes: a rotation area corresponding to a preset direction and a rotation area corresponding to a non-preset direction, the two directions being opposite to each other.
The first control unit is specifically configured to:
if the current trigger point is located in the rotation area corresponding to the preset direction, control the virtual camera to rotate in the preset direction; and if the current trigger point is located in the rotation area corresponding to the non-preset direction, control the virtual camera to rotate in the non-preset direction. The virtual camera acquires the image data of the virtual character and renders it to the display screen for display; the rotation area corresponding to the preset direction is one overlapping area of the first viewing-angle rotation region and the character movement region, and the rotation area corresponding to the non-preset direction is the other overlapping area.
In this embodiment, optionally, the second control unit is specifically configured to:
determine the virtual movement path of the virtual character based on the current trigger point, the preceding trigger point, and the trigger path, where the virtual starting point of the virtual movement path corresponds to the preceding trigger point and the virtual target point corresponds to the current trigger point; adjust the orientation data of the virtual character based on the orientation information between the virtual starting point and the virtual target point; and control the virtual character to move along the virtual movement path from the virtual starting point to the virtual target point.
In this embodiment, the control module 330 is further configured to, if the current trigger point is located in the character movement region, control the display pose of the virtual character based on the current trigger point, the preceding trigger point, and the trigger path.
In this embodiment, optionally, a second viewing-angle rotation region is also provided on the display screen of the interactive device; the second viewing-angle rotation region describes the manual viewing-angle rotation region and is independent of both the first viewing-angle rotation region and the character movement region.
The apparatus further includes: an update module.
The acquisition module 310 is further configured to, if the current trigger point is located in the second viewing-angle rotation region, acquire a first rotation angle value and a second rotation angle value of the virtual camera, the two being rotation angle values in two mutually perpendicular directions.
The update module is configured to update the first rotation angle value of the virtual camera based on the current trigger point and the trigger point preceding it, and to update the second rotation angle value of the virtual camera based on the current trigger point and the trigger point preceding it.
The virtual character movement and viewing-angle control apparatus provided in the present disclosure can execute the above method embodiments; for its specific implementation principles and technical effects, reference may be made to the method embodiments, which are not repeated here.
An embodiment of the present application further provides an electronic device. Referring to Fig. 4, Fig. 4 is a block diagram of the basic structure of the electronic device according to this embodiment.
The electronic device includes a memory 410 and a processor 420 communicatively coupled to each other via a system bus. It should be noted that only an electronic device having the memory 410 and the processor 420 is shown in the figure, but not all of the illustrated components are required; more or fewer components may be implemented instead. Those skilled in the art will understand that the electronic device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The electronic device may be a computing device such as a desktop computer, a notebook computer, a palm computer, a cloud server, and the like. The electronic device can perform man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad or voice control equipment and the like.
The memory 410 includes at least one type of readable storage medium, including non-volatile or volatile memory, such as flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, or optical disk; the RAM may include static RAM or dynamic RAM. In some embodiments, the memory 410 may be an internal storage unit of the electronic device, such as its hard disk or memory. In other embodiments, the memory 410 may also be an external storage device of the electronic device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the electronic device. Of course, the memory 410 may also include both an internal storage unit and an external storage device of the electronic device. In this embodiment, the memory 410 is typically used to store the operating system and various application software installed on the electronic device, such as the program code of the above-described methods, and may also be used to temporarily store various types of data that have been output or are to be output.
The processor 420 is generally used to perform the overall operations of the electronic device. In this embodiment, the memory 410 is used for storing program codes or instructions, the program codes include computer operation instructions, and the processor 420 is used for executing the program codes or instructions stored in the memory 410 or processing data, such as the program codes for executing the above-mentioned method.
Here, the bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. The bus system may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
Another embodiment of the present application further provides a computer-readable medium, which may be a computer-readable signal medium or a computer-readable storage medium. A processor in a computer reads the computer-readable program code stored in the computer-readable medium, so that the processor can perform the functional actions specified in each step, or combination of steps, of the above-described method, and generate the functional actions specified in each block, or combination of blocks, of the block diagram.
The computer-readable medium includes, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. The memory stores program code or instructions, the program code including computer operation instructions, and the processor executes the program code or instructions of the above-described methods stored in the memory.
The definition of the memory and the processor may refer to the description of the foregoing electronic device embodiments, and will not be repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The functional units or modules in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including several instructions to cause an electronic device (which may be a personal computer, a server, a network device, etc.) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps other than those listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of first, second, third, etc. does not denote any order, and the words are to be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specifically stated.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting thereof; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (10)

1. A virtual character movement and viewing-angle control method, characterized in that the method is applied to an interactive device, wherein a first viewing-angle rotation region and a character movement region are provided on a display screen of the interactive device, the first viewing-angle rotation region describes an automatic viewing-angle rotation region, the first viewing-angle rotation region overlaps a partial area of the character movement region, and the virtual character is displayed on the display screen of the interactive device in portrait (vertical-screen) mode;
the method comprises the following steps:
acquiring trigger operation data generated by a user on the display screen of the interactive device, wherein the trigger operation data includes: a current trigger point;
detecting a region affiliation of the current trigger point with respect to the character movement region and the first viewing-angle rotation region, wherein the region affiliation describes the display region in which the current trigger point is located, the display region including: the character movement region and/or the first viewing-angle rotation region;
if it is detected that the region affiliation of the current trigger point with respect to the character movement region and the first viewing-angle rotation region is a preset affiliation, controlling the viewing-angle rotation of the virtual character and controlling the display pose of the virtual character, wherein the display pose includes: position coordinates and orientation data;
wherein the preset affiliation describes that the current trigger point is located in an overlapping area of the character movement region and the first viewing-angle rotation region.
2. The method of claim 1, wherein the character movement region and the first viewing-angle rotation region have two overlapping areas at independent positions;
the preset affiliation includes: the current trigger point is located in one overlapping area of the character movement region and the first viewing-angle rotation region;
or the current trigger point is located in the other overlapping area of the character movement region and the first viewing-angle rotation region.
3. The method of claim 1, wherein controlling the viewing-angle rotation of the virtual character and controlling the display pose of the virtual character comprises:
controlling the viewing-angle rotation of the virtual character based on the current trigger point;
acquiring a trigger path between the current trigger point and the trigger point preceding the current trigger point;
and controlling the display pose of the virtual character based on the current trigger point, the trigger point preceding the current trigger point, and the trigger path.
4. The method of claim 3, wherein the first viewing-angle rotation region comprises: a rotation area corresponding to a preset direction and a rotation area corresponding to a non-preset direction, the preset direction and the non-preset direction being opposite to each other;
controlling the viewing-angle rotation of the virtual character based on the current trigger point comprises:
if the current trigger point is located in the rotation area corresponding to the preset direction, controlling a virtual camera to rotate in the preset direction; and if the current trigger point is located in the rotation area corresponding to the non-preset direction, controlling the virtual camera to rotate in the non-preset direction;
wherein the virtual camera is configured to acquire image data of the virtual character and render the image data to the display screen for display, the rotation area corresponding to the preset direction is one overlapping area of the first viewing-angle rotation region and the character movement region, and the rotation area corresponding to the non-preset direction is the other overlapping area of the first viewing-angle rotation region and the character movement region.
5. The method of claim 3, wherein controlling the display pose of the virtual character based on the current trigger point, the trigger point preceding the current trigger point, and the trigger path comprises:
determining a virtual movement path of the virtual character based on the current trigger point, the preceding trigger point, and the trigger path, wherein a virtual start point of the virtual movement path corresponds to the preceding trigger point, and a virtual target point of the virtual movement path corresponds to the current trigger point;
adjusting the orientation data of the virtual character based on orientation information between the virtual start point and the virtual target point;
and controlling the virtual character to move from the virtual start point to the virtual target point along the virtual movement path.
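Claim 5 maps the on-screen trigger path into the virtual world, orients the character from start toward target, then moves it along the path. The sketch below reuses the DisplayPose type from the first sketch; the uniform screen-to-world scale and the path ordering (preceding trigger point first, current trigger point last) are assumptions.

```python
# A hedged sketch of claim 5's pose control.
import math
from typing import List, Tuple

Point = Tuple[float, float]
WORLD_SCALE = 0.01  # assumed metres of virtual ground per screen pixel

def screen_to_world(p: Point) -> Point:
    return (p[0] * WORLD_SCALE, p[1] * WORLD_SCALE)

def move_character(pose: "DisplayPose", trigger_path: List[Point]) -> None:
    virtual_path = [screen_to_world(p) for p in trigger_path]
    start, target = virtual_path[0], virtual_path[-1]
    # Orientation data: face from the virtual start point toward the target.
    pose.yaw_degrees = math.degrees(
        math.atan2(target[1] - start[1], target[0] - start[0]))
    # Move along the virtual path; here the pose simply steps through each point.
    for x, y in virtual_path:
        pose.x, pose.y = x, y
```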
6. The method of claim 3, further comprising:
if the current trigger point is located in the character movement region, controlling the display pose of the virtual character based on the current trigger point, the preceding trigger point, and the trigger path.
7. The method of claim 4, wherein a second perspective rotation region is further provided on the display screen of the interactive device, the second perspective rotation region describing a manual perspective rotation region and being independent of both the first perspective rotation region and the character movement region;
the method further comprises:
if the current trigger point is located in the second perspective rotation region, acquiring a first rotation angle value and a second rotation angle value of the virtual camera, the first rotation angle value and the second rotation angle value being rotation angle values about two mutually perpendicular directions;
updating the first rotation angle value of the virtual camera based on the current trigger point and the preceding trigger point;
and updating the second rotation angle value of the virtual camera based on the current trigger point and the preceding trigger point.
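Claim 7's manual rotation region drives two rotation angles about perpendicular axes, interpreted here as yaw and pitch updated from the drag delta between consecutive trigger points. A hedged sketch follows; the sensitivity value and the pitch clamp are illustrative assumptions.

```python
# A hedged sketch of claim 7's manual perspective rotation.
from typing import Tuple

Point = Tuple[float, float]
SENSITIVITY = 0.2   # assumed degrees of rotation per pixel of drag
PITCH_LIMIT = 80.0  # assumed clamp to keep the camera from flipping over

class ManualCamera:
    def __init__(self) -> None:
        self.yaw_degrees = 0.0    # first rotation angle value
        self.pitch_degrees = 0.0  # second rotation angle value (perpendicular axis)

    def on_drag(self, previous: Point, current: Point) -> None:
        dx = current[0] - previous[0]
        dy = current[1] - previous[1]
        self.yaw_degrees = (self.yaw_degrees + dx * SENSITIVITY) % 360.0
        self.pitch_degrees = max(-PITCH_LIMIT,
                                 min(PITCH_LIMIT,
                                     self.pitch_degrees + dy * SENSITIVITY))
```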
8. A virtual character movement and perspective control apparatus, applied to an interactive device, wherein a first perspective rotation region and a character movement region are provided on a display screen of the interactive device, the first perspective rotation region describes an automatic perspective rotation region and overlaps part of the character movement region, and the virtual character is displayed on the display screen of the interactive device in portrait orientation;
the apparatus comprises:
an acquisition module, configured to acquire trigger operation data generated by a user on the display screen of the interactive device, the trigger operation data comprising: a current trigger point;
a detection module, configured to detect the region affiliation relationship between the current trigger point and the character movement region and the first perspective rotation region, the region affiliation relationship describing the display region in which the current trigger point is located, the display region comprising: the character movement region and/or the first perspective rotation region;
a control module, configured to, if it is detected that the region affiliation relationship between the current trigger point and the character movement region and the first perspective rotation region is the preset affiliation relationship, control the perspective rotation of the virtual character and control the display pose of the virtual character, the display pose comprising: position coordinates and orientation data;
the preset affiliation relationship describes that the current trigger point is located in an overlapping region of the character movement region and the first perspective rotation region.
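Claim 8's three-module decomposition mirrors the method steps above and can be outlined as a class skeleton; the class and method names are assumptions introduced only to show the structure.

```python
# A hedged skeleton of claim 8's apparatus: acquisition, detection, control.
from typing import Tuple

Point = Tuple[float, float]

class AcquisitionModule:
    def current_trigger_point(self) -> Point:
        """Return the latest trigger point from the display screen."""
        ...

class DetectionModule:
    def affiliation(self, point: Point) -> str:
        """Classify the point's display region: 'overlap', 'movement', or 'none'."""
        ...

class ControlModule:
    def apply(self, affiliation: str, point: Point) -> None:
        """On the preset affiliation, rotate the perspective and update the pose."""
        ...
```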
9. An electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the virtual character movement and perspective control method of any one of claims 1 to 7.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the virtual character movement and perspective control method of any one of claims 1 to 7.
CN202311727255.3A 2023-12-15 2023-12-15 Virtual character movement and visual angle control method, device, electronic equipment and medium Pending CN117695648A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311727255.3A CN117695648A (en) 2023-12-15 2023-12-15 Virtual character movement and visual angle control method, device, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN117695648A (en) 2024-03-15

Family

ID=90156518

Country Status (1)

Country Link
CN (1) CN117695648A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination