CN106502401B - Image control method and device

Image control method and device

Info

Publication number
CN106502401B
Authority
CN
China
Prior art keywords
gesture operation
operation signal
control
virtual reality
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610932769.6A
Other languages
Chinese (zh)
Other versions
CN106502401A (en)
Inventor
应良佳
周洋
胡亚娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Original Assignee
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yulong Computer Telecommunication Scientific Shenzhen Co Ltd filed Critical Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority to CN201610932769.6A
Publication of CN106502401A
Application granted
Publication of CN106502401B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/048023D-info-object: information is displayed on the internal or external surface of a three dimensional manipulable object, e.g. on the faces of a cube that can be rotated by the user

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention discloses an image control method, which comprises the following steps: acquiring a gesture operation signal input for a target object stereoscopic image in a virtual reality display scene, and acquiring a control parameter carried by the gesture operation signal; generating a virtual reality gesture operation model according to the gesture operation signal; and synthesizing the virtual reality gesture operation model with the target object stereoscopic image to generate a target stereoscopic image, performing a control operation on the target stereoscopic image based on the control parameter, and displaying the target stereoscopic image in the virtual reality display scene. The embodiment of the invention also discloses an image control device. By adopting the invention, operation control such as rotation, translation or folding is performed on the stereoscopic image of the target object through the gesture operation signal, so that any angle of each object contained in the original image can be displayed conveniently and rapidly, which improves convenience of operation and enhances operability and interest.

Description

Image control method and device
Technical Field
The invention relates to the technical field of computers, in particular to an image control method and device.
Background
Existing user terminals such as mobile phones, tablet computers and notebook computers are all provided with cameras, through which users can take photographs or record videos.
After shooting, the image can be viewed through the display interface of the user terminal. Because the angle of each object contained in the image is fixed by the original shooting angle, a user who wants to view the objects from different angles has to adjust the orientation of the camera and shoot multiple times, or spend a great deal of time afterwards processing the captured picture with retouching software, which brings great inconvenience.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide an image control method and apparatus that perform operation control such as rotation, translation or folding on a stereoscopic image of a target object through a gesture operation signal, so that any angle of each object included in the original image can be displayed conveniently and rapidly, improving convenience of operation and enhancing operability and interest.
In order to solve the above technical problem, an embodiment of the present invention provides an image control method, where the method includes:
acquiring a gesture operation signal input aiming at a target object stereo image in a virtual reality display scene, and acquiring a control parameter carried by the gesture operation signal;
generating a virtual reality gesture operation model according to the gesture operation signal;
and synthesizing the virtual reality gesture operation model and the target object three-dimensional image to generate a target three-dimensional image, performing control operation on the target three-dimensional image based on the control parameters, and displaying the target three-dimensional image in the virtual reality display scene.
Correspondingly, an embodiment of the present invention further provides an image control apparatus, including:
the parameter acquisition module is used for acquiring gesture operation signals input aiming at a three-dimensional image of a target object in a virtual reality display scene and acquiring control parameters carried by the gesture operation signals;
the model generating module is used for generating a virtual reality gesture operation model according to the gesture operation signal;
and the image control module is used for synthesizing the virtual reality gesture operation model and the target object three-dimensional image to generate a target three-dimensional image, controlling the target three-dimensional image based on the control parameters, and displaying the target three-dimensional image in the virtual reality display scene.
In the embodiment of the invention, after the image of the target object is acquired by a fixed or rotatable camera, it is displayed three-dimensionally in the virtual reality display scene. When a gesture operation signal input for the target object stereoscopic image in the virtual reality display scene is acquired, the control parameter carried by the gesture operation signal is obtained, a virtual reality gesture operation model is generated according to the gesture operation signal, the virtual reality gesture operation model is then synthesized with the target object stereoscopic image to generate a target stereoscopic image, the control operation is performed on the target stereoscopic image based on the control parameter, and the target stereoscopic image is displayed in the virtual reality display scene. Operation control such as rotation, translation or folding is performed on the stereoscopic image of the target object through the gesture operation signal, so that any angle of each object contained in the original image can be displayed conveniently and rapidly, which improves convenience of operation and enhances operability and interest.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flow chart of an image control method according to an embodiment of the present invention;
FIG. 2 is an interface diagram of a virtual reality gesture manipulation model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an interface for controlling operations according to an embodiment of the present invention;
FIG. 4 is a flow chart of an image control method according to another embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an image control apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of another image control apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of another image control apparatus in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "including" and "having," and any variations thereof, in the description and claims of this invention and the above-described drawings are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The image control method mentioned in the embodiments of the present invention is implemented by a computer program and may be executed on a von Neumann architecture computer system acting as the image control device. The computer system may be a server. The image control device provided by the embodiment of the invention may be a terminal device such as a personal computer, a tablet computer, a notebook computer, a smart phone, a palmtop computer or a mobile internet device (MID).
Detailed descriptions are given below.
Fig. 1 is a schematic flow chart of an image control method according to an embodiment of the present invention, where the method at least includes:
step S101, acquiring a gesture operation signal input aiming at a target object stereo image in a virtual reality display scene, and acquiring a control parameter carried by the gesture operation signal;
specifically, a Virtual Reality (also called Virtual Reality, VR) is a Virtual world that uses computer simulation technology to generate a three-dimensional space, and can provide a simulation about vision, hearing, and touch in the real world for a user, and the user can interact with things in the space, can move according to his will, and has a sense of integration and a sense of participation. The basic principle is that a three-dimensional space environment is simulated by a computer, other special hardware equipment (such as a video helmet, a 3D sound, a force feedback game device, VR eyes and the like) and information software, and a user interacts with the computer in the virtual world. The scene displayed in the virtual reality is a virtual reality display scene, and the target object three-dimensional image is a three-dimensional object displayed in the virtual reality display scene.
In this embodiment, after a user enters the virtual reality display scene by wearing a virtual reality (VR) device such as VR glasses or a VR helmet, the target object stereoscopic image displayed in the virtual reality display interface can be observed. When the user performs a gesture operation on the target object stereoscopic image, the image control device collects the input gesture operation signal and obtains the control parameters carried by the gesture operation signal, such as an operation control angle and an operation control distance. The gesture operation may be a folding operation, a rotating operation, a moving operation and the like, and is not specifically limited here.
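For illustration only, the following minimal Python sketch shows one possible in-memory representation of such a gesture operation signal and its control parameters; the class and field names (GestureSignal, ControlParams, finger_openness and so on) are assumptions made for this sketch, not structures defined by the embodiment.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ControlParams:
    """Control parameters carried by a gesture operation signal (assumed fields)."""
    control_angle_deg: float = 0.0    # operation control angle, e.g. a rotation angle
    control_distance_cm: float = 0.0  # operation control distance, e.g. a translation distance

@dataclass
class GestureSignal:
    """A collected gesture operation signal (assumed fields)."""
    finger_openness: List[float]             # opening degree between adjacent fingers
    finger_curvature: List[float]            # curvature of each finger
    hand_contour: List[Tuple[float, float]]  # sampled hand contour points
    slide_track: List[Tuple[float, float]]   # sampled positions of a sliding gesture
    params: ControlParams = field(default_factory=ControlParams)
```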
Step S102, generating a virtual reality gesture operation model according to the gesture operation signal;
specifically, the gesture operation signal comprises a finger opening degree, a finger curvature degree and hand contour information, and the virtual reality gesture operation model can be generated according to the finger opening degree, the finger curvature degree and the hand contour information. The virtual reality gesture operation model generation method comprises the steps of generating a virtual reality gesture operation model by using a preset modeling algorithm or function, and taking the degree of opening and closing of fingers, the degree of bending of the fingers and the hand contour information as input parameters to obtain corresponding output, wherein the output is the generated virtual reality gesture operation model; and a virtual reality gesture operation model corresponding to the finger opening degree, the finger curvature and the human hand contour information of the gesture operation signal can be searched in a preset gesture operation model set.
For example, as shown in fig. 2, where A and B are the finger curvatures of the user's ring finger and little finger, and C is the finger opening degree between the ring finger and the little finger, the virtual reality gesture operation model can be generated by obtaining the finger curvature of each finger and the finger opening degree between adjacent fingers, and then adding the contour model of the hand (such as the length and thickness of each finger).
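For illustration only, the Python sketch below shows the second variant described above, looking up the closest model in a preset set of gesture operation models; the Euclidean distance metric, the tolerance value and the dictionary layout of the preset set are assumptions made for this sketch rather than details given by the embodiment.

```python
import math
from typing import Dict, List, Optional, Tuple

def match_preset_model(
    openness: List[float],
    curvature: List[float],
    preset_models: Dict[str, Tuple[List[float], List[float]]],
    tolerance: float = 0.2,
) -> Optional[str]:
    """Return the name of the preset gesture operation model whose reference
    opening and curvature vectors are closest to the measured ones, or None
    if no model lies within the tolerance."""
    best_name, best_dist = None, float("inf")
    for name, (ref_open, ref_curve) in preset_models.items():
        dist = math.sqrt(
            sum((a - b) ** 2 for a, b in zip(openness, ref_open))
            + sum((a - b) ** 2 for a, b in zip(curvature, ref_curve))
        )
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= tolerance else None
```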
Step S103, synthesizing the virtual reality gesture operation model and the target object three-dimensional image to generate a target three-dimensional image, performing control operation on the target three-dimensional image based on the control parameters, and displaying the target three-dimensional image in the virtual reality display scene.
Specifically, the target object stereoscopic image is synthesized with the virtual reality gesture operation model, and the synthesized result is the target stereoscopic image. The target stereoscopic image is then controlled based on the control parameters carried by the gesture operation signal (such as the operation control angle and the operation control distance) and is displayed after the operation is completed. The control operation includes at least one of a folding operation, a translation operation and a rotation operation.
For example, as shown in fig. 3, the user A1 is hanging from a horizontal bar; after A1 is synthesized with the gesture operation model, a control operation such as a flipping operation is performed based on a sliding movement of the hand, thereby controlling A1 to stand upside down on the horizontal bar.
Optionally, the target stereo image after the control operation may be displayed according to a preset display effect, for example, according to a preset display scale, a preset display color, or a preset display brightness.
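For illustration only, the sketch below shows what a rotation or translation control operation might do to the vertices of the synthesized target stereoscopic image; the vertex representation and the choice of the vertical axis as the rotation axis are assumptions made for this example.

```python
import math
from typing import List, Tuple

def apply_control_operation(
    vertices: List[Tuple[float, float, float]],
    control_angle_deg: float = 0.0,
    control_distance: float = 0.0,
) -> List[Tuple[float, float, float]]:
    """Rotate the vertices about the vertical (y) axis by control_angle_deg,
    then translate them along the x axis by control_distance."""
    rad = math.radians(control_angle_deg)
    cos_a, sin_a = math.cos(rad), math.sin(rad)
    transformed = []
    for x, y, z in vertices:
        rx = x * cos_a + z * sin_a    # rotation in the x-z plane
        rz = -x * sin_a + z * cos_a
        transformed.append((rx + control_distance, y, rz))
    return transformed

# e.g. rotating a single vertex 90 degrees clockwise in place (clockwise taken as negative)
print(apply_control_operation([(1.0, 0.0, 0.0)], control_angle_deg=-90.0))
```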
In the embodiment of the invention, after the image of the target object is acquired by a fixed or rotatable camera, it is displayed three-dimensionally in the virtual reality display scene. When a gesture operation signal input for the target object stereoscopic image in the virtual reality display scene is acquired, the control parameter carried by the gesture operation signal is obtained, a virtual reality gesture operation model is generated according to the gesture operation signal, the virtual reality gesture operation model is then synthesized with the target object stereoscopic image to generate a target stereoscopic image, the control operation is performed on the target stereoscopic image based on the control parameter, and the target stereoscopic image is displayed in the virtual reality display scene. Operation control such as rotation, translation or folding is performed on the stereoscopic image of the target object through the gesture operation signal, so that any angle of each object contained in the original image can be displayed conveniently and rapidly, which improves convenience of operation and enhances operability and interest.
Fig. 4 is a schematic flow chart of an image control method according to another embodiment of the present invention, as shown in the figure, the method at least includes:
step S201, acquiring gesture operation signals input aiming at a target object three-dimensional image in a virtual reality display scene;
specifically, a Virtual Reality (also called Virtual Reality, VR) is a Virtual world that uses computer simulation technology to generate a three-dimensional space, and can provide a simulation about vision, hearing, and touch in the real world for a user, and the user can interact with things in the space, can move according to his will, and has a sense of integration and a sense of participation. The basic principle is that a three-dimensional space environment is simulated by a computer, other special hardware equipment (such as a video helmet, a 3D sound, a force feedback game device, VR eyes and the like) and information software, and a user interacts with the computer in the virtual world. The scene displayed in the virtual reality is a virtual reality display scene, and the target object three-dimensional image is a three-dimensional object displayed in the virtual reality display scene.
In this embodiment, after a user enters the virtual reality display scene by wearing a virtual reality (VR) device such as VR glasses or a VR helmet, the user can observe the target object stereoscopic image displayed in the virtual reality display interface. When the user performs a gesture operation on the target object stereoscopic image, the image control device acquires the input gesture operation signal. The gesture operation may be a folding operation, a rotating operation, a moving operation and the like, and is not specifically limited here.
Step S202, judging whether the gesture operation signal is matched with a preset trigger operation signal;
specifically, the image control device matches the acquired gesture operation signal with a preset trigger operation signal, and if the matching is successful, it indicates that the gesture operation signal is used for controlling the target object stereo image.
Optionally, if the preset trigger operation signal is a specific sliding gesture, the specific sliding gesture includes a sliding track and a sliding direction. The sliding track of the gesture operation signal is matched against the preset sliding track and the sliding direction against the preset sliding direction; if the matching error is within a preset error range, the matching is determined to be successful.
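For illustration only, a minimal Python sketch of such a matching step; both tracks are assumed to be equal-length lists of (x, y) samples, and the mean point deviation and the 10-degree direction tolerance are illustrative assumptions rather than values given by the embodiment.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def matches_trigger(
    track: List[Point],
    preset_track: List[Point],
    preset_direction_deg: float,
    max_track_error: float = 0.1,
    max_direction_error_deg: float = 10.0,
) -> bool:
    """Return True when the sliding track and sliding direction of the gesture
    operation signal match the preset trigger within the error ranges."""
    if len(track) != len(preset_track) or len(track) < 2:
        return False
    # average point-to-point deviation between the captured and preset tracks
    track_error = sum(math.dist(p, q) for p, q in zip(track, preset_track)) / len(track)
    # overall sliding direction from the first to the last sample
    dx, dy = track[-1][0] - track[0][0], track[-1][1] - track[0][1]
    direction_deg = math.degrees(math.atan2(dy, dx))
    direction_error = abs((direction_deg - preset_direction_deg + 180.0) % 360.0 - 180.0)
    return track_error <= max_track_error and direction_error <= max_direction_error_deg
```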
Step S203, if the gesture operation signal is matched with the preset trigger operation signal, obtaining control parameters carried by the gesture operation signal, wherein the control parameters comprise an operation control angle and an operation control distance;
specifically, when the gesture operation signal is successfully matched with the preset trigger operation signal, the control parameters carried by the gesture operation signal, such as an operation control angle (rotation angle) and an operation control distance (movement distance), are acquired.
Step S204, generating an image control instruction according to the operation control angle and the operation control distance;
specifically, the image control instruction is used for instructing the image control device to execute control operation, and the instruction carries parameters such as an operation control angle and an operation control distance. For example, if the operation control angle is 90 degrees clockwise and the operation control distance is 0, a rotation instruction for rotating the target stereoscopic image 90 degrees clockwise in place is generated; if the operation control angle is 0 degree and the operation control distance is 10 cm, a movement instruction for translating the target stereoscopic image by 10 cm is generated, and the like.
Step S205, generating a virtual reality gesture operation model according to the gesture operation signal;
specifically, the gesture operation signal comprises a finger opening degree, a finger curvature degree and hand contour information, and the virtual reality gesture operation model can be generated according to the finger opening degree, the finger curvature degree and the hand contour information. The virtual reality gesture operation model generation method comprises the steps of generating a virtual reality gesture operation model by using a preset modeling algorithm or function, and taking the degree of opening and closing of fingers, the degree of bending of the fingers and the hand contour information as input parameters to obtain corresponding output, wherein the output is the generated virtual reality gesture operation model; and a virtual reality gesture operation model corresponding to the finger opening degree, the finger curvature and the human hand contour information of the gesture operation signal can be searched in a preset gesture operation model set.
For example, as shown in fig. 2, where A and B are the finger curvatures of the user's ring finger and little finger, and C is the finger opening degree between the ring finger and the little finger, the virtual reality gesture operation model can be generated by obtaining the finger curvature of each finger and the finger opening degree between adjacent fingers, and then adding the contour model of the hand (such as the length and thickness of each finger).
Step S206, synthesizing the virtual reality gesture operation model and the target object stereoscopic image to generate a target stereoscopic image, performing a control operation on the target stereoscopic image based on the image control instruction, and displaying the target stereoscopic image in the virtual reality display scene.
Specifically, the target object stereoscopic image is synthesized with the virtual reality gesture operation model, and the synthesized result is the target stereoscopic image. The target stereoscopic image is then controlled based on the control parameters carried by the gesture operation signal (such as the operation control angle and the operation control distance) and is displayed after the operation is completed. The control operation includes at least one of a folding operation, a translation operation and a rotation operation.
For example, as shown in fig. 3, the user A1 is hanging from a horizontal bar; after A1 is synthesized with the gesture operation model, a control operation such as a flipping operation is performed based on a sliding movement of the hand, thereby controlling A1 to stand upside down on the horizontal bar.
Optionally, the target stereo image after the control operation may be displayed according to a preset display effect, for example, according to a preset display scale, a preset display color, or a preset display brightness.
In the embodiment of the invention, after the image of the target object is acquired by a fixed or rotatable camera, it is displayed three-dimensionally in the virtual reality display scene. When a gesture operation signal input for the target object stereoscopic image in the virtual reality display scene is acquired, the control parameter carried by the gesture operation signal is obtained, a virtual reality gesture operation model is generated according to the gesture operation signal, the virtual reality gesture operation model is then synthesized with the target object stereoscopic image to generate a target stereoscopic image, the control operation is performed on the target stereoscopic image based on the control parameter, and the target stereoscopic image is displayed in the virtual reality display scene. Operation control such as rotation, translation or folding is performed on the stereoscopic image of the target object through the gesture operation signal, so that any angle of each object contained in the original image can be displayed conveniently and rapidly, which improves convenience of operation and enhances operability and interest.
Fig. 5 is a schematic structural diagram of an image control apparatus according to an embodiment of the present invention, where the image control apparatus includes:
the parameter acquiring module 10 is configured to acquire a gesture operation signal input for a target object stereoscopic image in a virtual reality display scene, and acquire a control parameter carried by the gesture operation signal;
specifically, a Virtual Reality (also called Virtual Reality, VR) is a Virtual world that uses computer simulation technology to generate a three-dimensional space, and can provide a simulation about vision, hearing, and touch in the real world for a user, and the user can interact with things in the space, can move according to his will, and has a sense of integration and a sense of participation. The basic principle is that a three-dimensional space environment is simulated by a computer, other special hardware equipment (such as a video helmet, a 3D sound, a force feedback game device, VR eyes and the like) and information software, and a user interacts with the computer in the virtual world. The scene displayed in the virtual reality is a virtual reality display scene, and the target object three-dimensional image is a three-dimensional object displayed in the virtual reality display scene.
In this embodiment, after a user enters the virtual reality display scene by wearing a virtual reality (VR) device such as VR glasses or a VR helmet, the target object stereoscopic image displayed in the virtual reality display interface can be observed. When the user performs a gesture operation on the target object stereoscopic image, the image control device collects the input gesture operation signal and obtains the control parameters carried by the gesture operation signal, such as an operation control angle and an operation control distance. The gesture operation may be a folding operation, a rotating operation, a moving operation and the like, and is not specifically limited here.
The model generating module 20 is configured to generate a virtual reality gesture operation model according to the gesture operation signal;
optionally, the gesture operation signal includes a finger opening degree, a finger curvature degree and hand contour information;
the model generation module 20 is specifically configured to:
and generating the virtual real-environment gesture operation model according to the finger opening degree, the finger bending degree and the hand contour information.
Specifically, the gesture operation signal comprises a finger opening degree, a finger curvature degree and hand contour information, and the virtual reality gesture operation model can be generated from these. One way is to use a preset modeling algorithm or function, with the finger opening degree, the finger curvature degree and the hand contour information as input parameters; the corresponding output is the generated virtual reality gesture operation model. Alternatively, a virtual reality gesture operation model corresponding to the finger opening degree, finger curvature and hand contour information of the gesture operation signal can be looked up in a preset set of gesture operation models.
For example, as shown in fig. 2, where A and B are the finger curvatures of the user's ring finger and little finger, and C is the finger opening degree between the ring finger and the little finger, the virtual reality gesture operation model can be generated by obtaining the finger curvature of each finger and the finger opening degree between adjacent fingers, and then adding the contour model of the hand (such as the length and thickness of each finger).
The image control module 30 is configured to synthesize the virtual reality gesture operation model and the target object stereoscopic image to generate a target stereoscopic image, perform a control operation on the target stereoscopic image based on the control parameter, and display the target stereoscopic image in the virtual reality display scene.
Specifically, the target object stereoscopic image is synthesized with the virtual reality gesture operation model, and the synthesized result is the target stereoscopic image. The target stereoscopic image is then controlled based on the control parameters carried by the gesture operation signal (such as the operation control angle and the operation control distance) and is displayed after the operation is completed. The control operation includes at least one of a folding operation, a translation operation and a rotation operation.
For example, as shown in fig. 3, the user A1 is hanging from a horizontal bar; after A1 is synthesized with the gesture operation model, a control operation such as a flipping operation is performed based on a sliding movement of the hand, thereby controlling A1 to stand upside down on the horizontal bar.
Optionally, the target stereo image after the control operation may be displayed according to a preset display effect, for example, according to a preset display scale, a preset display color, or a preset display brightness.
Optionally, as shown in fig. 6, the control parameters include an operation control angle and an operation control distance;
the device further comprises: the instruction generating module 40 is configured to generate an image control instruction according to the operation control angle and the operation control distance;
the image control module 30 performs a control operation on the target stereo image based on the control parameter, specifically to:
and performing control operation on the target stereo image based on the image control instruction.
Specifically, the image control instruction is used for instructing the image control device to execute control operation, and the instruction carries parameters such as an operation control angle and an operation control distance. For example, if the operation control angle is 90 degrees clockwise and the operation control distance is 0, a rotation instruction for rotating the target stereoscopic image 90 degrees clockwise in place is generated; if the operation control angle is 0 degree and the operation control distance is 10 cm, a movement instruction for translating the target stereoscopic image by 10 cm is generated, and the like.
Optionally, as shown in fig. 6, the apparatus further includes:
the signal judgment module 50 is configured to judge whether the gesture operation signal matches a preset trigger operation signal, and when the gesture operation signal matches the preset trigger operation signal, trigger the parameter acquisition module to acquire the control parameter carried by the gesture operation signal.
Specifically, the image control device matches the acquired gesture operation signal with a preset trigger operation signal, and if the matching is successful, it indicates that the gesture operation signal is used for controlling the target object stereo image.
Optionally, if the preset trigger operation signal is a specific sliding gesture, the specific sliding gesture includes a sliding track and a sliding direction. The sliding track of the gesture operation signal is matched against the preset sliding track and the sliding direction against the preset sliding direction; if the matching error is within a preset error range, the matching is determined to be successful.
And when the gesture operation signal is successfully matched with the preset trigger operation signal, acquiring control parameters carried by the gesture operation signal, such as an operation control angle (rotation angle), an operation control distance (movement distance) and the like.
In the embodiment of the invention, after the image of the target object is acquired by a fixed or rotatable camera, it is displayed three-dimensionally in the virtual reality display scene. When a gesture operation signal input for the target object stereoscopic image in the virtual reality display scene is acquired, the control parameter carried by the gesture operation signal is obtained, a virtual reality gesture operation model is generated according to the gesture operation signal, the virtual reality gesture operation model is then synthesized with the target object stereoscopic image to generate a target stereoscopic image, the control operation is performed on the target stereoscopic image based on the control parameter, and the target stereoscopic image is displayed in the virtual reality display scene. Operation control such as rotation, translation or folding is performed on the stereoscopic image of the target object through the gesture operation signal, so that any angle of each object contained in the original image can be displayed conveniently and rapidly, which improves convenience of operation and enhances operability and interest.
Fig. 7 is a schematic structural diagram of another image control apparatus according to an embodiment of the present invention. As shown in fig. 7, the image control apparatus 1000 may include: at least one processor 1001 (such as a CPU), at least one network interface 1004, a user interface 1003, a memory 1005 and at least one communication bus 1002. The communication bus 1002 is used to implement connection and communication between these components. The user interface 1003 may include a display screen and a keyboard, and optionally may also include a standard wired interface and a standard wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory. The memory 1005 may optionally be at least one storage device located remotely from the processor 1001. As shown in fig. 7, the memory 1005, which is a kind of computer storage medium, may include an operating system, a network communication module, a user interface module and an image control application program.
In the image control apparatus 1000 shown in fig. 7, the user interface 1003 is an interface mainly used for providing input for a user, and acquiring data input by the user; the network interface 1004 is mainly used for data communication with the user terminal; and the processor 1001 may be configured to invoke an image control application stored in the memory 1005 and specifically perform the following operations:
acquiring a gesture operation signal input aiming at a target object stereo image in a virtual reality display scene, and acquiring a control parameter carried by the gesture operation signal;
generating a virtual reality gesture operation model according to the gesture operation signal;
and synthesizing the virtual reality gesture operation model and the target object three-dimensional image to generate a target three-dimensional image, performing control operation on the target three-dimensional image based on the control parameters, and displaying the target three-dimensional image in the virtual reality display scene.
In one embodiment, the gesture operation signal includes a finger opening degree, a finger curvature degree and hand contour information, and when the processor 1001 executes the step of generating the virtual reality gesture operation model according to the gesture operation signal, the following operations are specifically performed:
and generating the virtual real-environment gesture operation model according to the finger opening degree, the finger bending degree and the hand contour information.
In one embodiment, the control parameters include an operation control angle and an operation control distance, and the processor 1001 further performs the following steps after performing the acquisition of the gesture operation signal input for the stereoscopic image of the target object in the virtual reality display scene:
generating an image control instruction according to the operation control angle and the operation control distance;
when the processor 1001 performs the control operation on the target stereoscopic image based on the control parameter, the following operations are specifically performed:
and performing control operation on the target stereo image based on the image control instruction.
In one embodiment, before performing the step of acquiring the control parameter carried by the gesture operation signal, the processor 1001 further performs the following steps:
judging whether the gesture operation signal is matched with a preset trigger operation signal or not;
and if the gesture operation signal is matched with the preset trigger operation signal, triggering and executing the step of acquiring the control parameters carried by the gesture operation signal.
In one embodiment, the control operation includes at least one of a folding operation, a translation operation, and a rotation operation.
In the embodiment of the invention, after the image of the target object is acquired by a fixed or rotatable camera, it is displayed three-dimensionally in the virtual reality display scene. When a gesture operation signal input for the target object stereoscopic image in the virtual reality display scene is acquired, the control parameter carried by the gesture operation signal is obtained, a virtual reality gesture operation model is generated according to the gesture operation signal, the virtual reality gesture operation model is then synthesized with the target object stereoscopic image to generate a target stereoscopic image, the control operation is performed on the target stereoscopic image based on the control parameter, and the target stereoscopic image is displayed in the virtual reality display scene. Operation control such as rotation, translation or folding is performed on the stereoscopic image of the target object through the gesture operation signal, so that any angle of each object contained in the original image can be displayed conveniently and rapidly, which improves convenience of operation and enhances operability and interest.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure is only directed to preferred embodiments of the present invention, which certainly cannot be taken to limit the scope of the claims; equivalent variations made according to the claims of the present invention therefore still fall within the scope of the invention.

Claims (8)

1. An image control method, comprising:
acquiring a gesture operation signal input for a target object stereoscopic image in a virtual reality display scene, and judging whether the gesture operation signal matches a preset trigger operation signal, wherein the preset trigger operation signal comprises a sliding track and a sliding direction; if the gesture operation signal matches the preset trigger operation signal, acquiring a control parameter carried by the gesture operation signal, wherein the matching of the gesture operation signal with the preset trigger operation signal comprises: when the sliding track of the gesture operation signal matches a preset sliding track, the sliding direction of the gesture operation signal matches a preset sliding direction, and the matching error is within a preset error range, determining that the matching is successful;
generating a virtual reality gesture operation model according to the gesture operation signal;
and synthesizing the virtual reality gesture operation model and the target object stereo image to generate a target stereo image, performing control operation on the target stereo image based on the control parameters, and displaying the target stereo image in the virtual reality display scene according to a preset display effect, wherein the preset display effect comprises a preset display scale, a preset display color or a preset display brightness.
2. The method of claim 1, wherein the gesture operation signal includes a degree of finger opening, a degree of finger curvature, and hand contour information;
the generating of the virtual reality gesture operation model according to the gesture operation signal comprises:
and generating the virtual real-environment gesture operation model according to the finger opening degree, the finger bending degree and the hand contour information.
3. The method of claim 1, wherein the control parameters include an operational control angle and an operational control distance;
after the gesture operation signal input aiming at the target object stereo image in the virtual reality display scene is collected, the method further comprises the following steps:
generating an image control instruction according to the operation control angle and the operation control distance;
the control operation on the target stereo image based on the control parameter comprises:
and performing control operation on the target stereo image based on the image control instruction.
4. The method of any of claims 1-3, wherein the control operation comprises at least one of a folding operation, a translation operation, and a rotation operation.
5. An image control apparatus, characterized by comprising:
the parameter acquisition module is used for acquiring a gesture operation signal input for a target object stereoscopic image in a virtual reality display scene and judging whether the gesture operation signal matches a preset trigger operation signal, wherein the preset trigger operation signal comprises a sliding track and a sliding direction; if the gesture operation signal matches the preset trigger operation signal, a control parameter carried by the gesture operation signal is acquired, wherein the matching of the gesture operation signal with the preset trigger operation signal comprises: when the sliding track of the gesture operation signal matches a preset sliding track, the sliding direction of the gesture operation signal matches a preset sliding direction, and the matching error is within a preset error range, determining that the matching is successful;
the model generating module is used for generating a virtual reality gesture operation model according to the gesture operation signal;
and the image control module is used for synthesizing the virtual reality gesture operation model and the target object stereo image to generate a target stereo image, controlling and operating the target stereo image based on the control parameters, and displaying the target stereo image in the virtual reality display scene according to a preset display effect, wherein the preset display effect comprises a preset display proportion, a preset display color or a preset display brightness.
6. The apparatus of claim 5, wherein the gesture operation signal includes a degree of finger opening, a degree of finger curvature, and hand contour information;
the model generation module is specifically configured to:
and generating the virtual real-environment gesture operation model according to the finger opening degree, the finger bending degree and the hand contour information.
7. The apparatus of claim 5, wherein the control parameters include an operational control angle and an operational control distance;
the device further comprises:
the instruction generating module is used for generating an image control instruction according to the operation control angle and the operation control distance;
the image control module performs control operation on the target stereo image based on the control parameter, and is specifically configured to:
and performing control operation on the target stereo image based on the image control instruction.
8. The apparatus of any of claims 5-7, wherein the control operation comprises at least one of a folding operation, a translation operation, and a rotation operation.
CN201610932769.6A 2016-10-31 2016-10-31 Image control method and device Active CN106502401B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610932769.6A CN106502401B (en) 2016-10-31 2016-10-31 Image control method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610932769.6A CN106502401B (en) 2016-10-31 2016-10-31 Image control method and device

Publications (2)

Publication Number Publication Date
CN106502401A CN106502401A (en) 2017-03-15
CN106502401B true CN106502401B (en) 2020-01-10

Family

ID=58319566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610932769.6A Active CN106502401B (en) 2016-10-31 2016-10-31 Image control method and device

Country Status (1)

Country Link
CN (1) CN106502401B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109656432B (en) * 2017-10-10 2022-09-13 腾讯科技(深圳)有限公司 Control method, device, equipment and storage medium in virtual reality environment
CN108632373B (en) * 2018-05-09 2021-11-30 方超 Equipment control method and system
CN110187820A (en) * 2019-05-09 2019-08-30 浙江开奇科技有限公司 Display control method and mobile terminal for digital guide to visitors
CN111640206A (en) * 2020-06-08 2020-09-08 上海商汤智能科技有限公司 Dynamic control method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103415825A (en) * 2010-12-29 2013-11-27 汤姆逊许可公司 System and method for gesture recognition
CN104216517A (en) * 2014-08-25 2014-12-17 联想(北京)有限公司 Information processing method and electronic equipment
CN104765459A (en) * 2015-04-23 2015-07-08 无锡天脉聚源传媒科技有限公司 Virtual operation implementation method and device
CN105912110A (en) * 2016-04-06 2016-08-31 北京锤子数码科技有限公司 Method, device and system for performing target selection in virtual reality space

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01255052A (en) * 1988-04-05 1989-10-11 Fujitsu Ltd System for processing memory dump layout output


Also Published As

Publication number Publication date
CN106502401A (en) 2017-03-15


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant