CN114339039A - Virtual photographing method and device based on gesture recognition, electronic equipment and medium - Google Patents

Virtual photographing method and device based on gesture recognition, electronic equipment and medium

Info

Publication number
CN114339039A
CN114339039A (application CN202111600507.7A)
Authority
CN
China
Prior art keywords
gesture
finger
determining
recognizing
photographing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111600507.7A
Other languages
Chinese (zh)
Inventor
廖加威
谢佳晟
张毅
张艺媛
刘瑜
黄熙
任晓华
黄晓琳
赵慧斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111600507.7A
Publication of CN114339039A
Legal status: Pending

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides a virtual photographing method and device based on gesture recognition, an electronic device, and a medium, relating to the field of computer technology and in particular to the field of virtual reality. The method comprises the following steps: monitoring a user's gesture when a gesture photographing mode is enabled in the virtual environment; determining a viewing area according to the position of the gesture; recognizing the gesture action to determine a matched photographing operation; and executing the matched photographing operation. This scheme simplifies the photographing process: the user no longer needs to perform a series of operations such as selecting, clicking, and projecting rays with a moved handle, and can complete VR photographing easily through gestures alone. The method is more convenient, faster, and more intelligent, realizes a super-realistic photographing mode, and enhances the user experience.

Description

Virtual photographing method and device based on gesture recognition, electronic equipment and medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a virtual photographing method and apparatus based on gesture recognition, an electronic device, and a medium.
Background
Currently, in VR (Virtual Reality) social applications, a series of operations is usually required to implement a photographing function. First, the user clicks a photographing icon on the interface to pop up a photographing dialog box that displays a framing picture. The user then moves a handle so that a ray is projected onto the bottom area of the dialog box, long-presses the trigger key to drag the dialog box and select a different framing picture, and finally clicks the shutter button to complete the photograph.
This photographing flow in VR social applications merely grafts the real world's physical photographing pattern onto the VR environment: the photographing mode is unchanged, the operation is not simplified, the user gets no super-realistic experience, and the process is inefficient and insufficiently intelligent.
Disclosure of Invention
The disclosure provides a virtual photographing method and device based on gesture recognition, an electronic device, a storage medium and a computer program product.
According to an aspect of the present disclosure, a virtual photographing method based on gesture recognition is provided, including:
monitoring the gesture of a user under the condition that the virtual environment starts a gesture photographing mode;
determining a viewing area according to the position of the gesture;
recognizing the gesture to determine a matched photographing operation;
and executing the matched photographing operation.
According to another aspect of the present disclosure, there is provided a virtual photographing apparatus based on gesture recognition, including:
the monitoring module is used for monitoring the gesture of the user under the condition that the gesture photographing mode is started in the virtual environment;
the framing module is used for determining a framing area according to the position of the gesture;
the recognition module is used for recognizing the action of the gesture to determine the matched photographing operation;
and the photographing module is used for executing the matched photographing operation.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method in any embodiment of the disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform a method in any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising computer programs/instructions which, when executed by a processor, implement the method in any of the embodiments of the present disclosure.
The technical scheme of the embodiments of the disclosure photographs in the virtual environment based on gesture recognition, simplifying the photographing process: the user no longer needs to perform a series of operations such as selecting, clicking, and projecting rays with a moved handle, and can complete VR photographing easily through gestures. The method is more convenient, faster, and more intelligent, realizes a super-realistic photographing mode, and enhances the user experience.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram of a virtual photographing method based on gesture recognition according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a virtual photographing method based on gesture recognition according to an embodiment of the present disclosure;
FIG. 3 is a block diagram of a virtual photographing apparatus based on gesture recognition according to an embodiment of the present disclosure;
FIG. 4 is a block diagram of an identification module in an embodiment in accordance with the present disclosure;
fig. 5 is a block diagram of an electronic device for implementing a virtual photographing method based on gesture recognition according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The technical scheme of the embodiments of the disclosure applies to VR scenarios, especially photographing within a VR virtual reality environment. In the prior technique that directly grafts the real world's physical photographing mode into the VR environment, the photographing mode is unchanged, the operation is not simplified, the user has no super-realistic experience, and the approach is inefficient and insufficiently intelligent. The scheme provided by the embodiments of the disclosure instead photographs in the virtual environment based on gesture recognition, which simplifies the photographing process: the user no longer needs to perform a series of operations such as selecting, clicking, and projecting rays with a handle, and can complete VR photographing easily through gestures. The method is more convenient, faster, and more intelligent, realizes a super-realistic photographing mode, and enhances the user experience.
Fig. 1 is a schematic diagram of a virtual photographing method based on gesture recognition in an embodiment of the present disclosure. As shown in fig. 1, the method includes:
s101: monitoring the gesture of a user under the condition that the virtual environment starts a gesture photographing mode;
s102: determining a viewing area according to the position of the gesture;
s103: recognizing the gesture to determine the matched photographing operation;
s104: and executing the matched photographing operation.
In one embodiment, the step S102 may include:
and determining the area framed by the connecting line of the finger positions in the gesture as a viewing area.
Framing the viewing area by connecting finger positions is simple to implement and convenient: framing is completed merely by recognizing the positions of the user's fingers, which improves the user experience.
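As a rough illustration of the framing step, the viewing area bounded by the finger-position connecting lines could be computed as below. The fingertip coordinate format and the axis-aligned rectangle simplification are assumptions for illustration, not the patent's implementation.

```python
def viewing_area(fingertips):
    """fingertips: list of (x, y) tracked fingertip positions.

    Returns the axis-aligned rectangle framed by connecting them,
    as (left, top, right, bottom)."""
    xs = [p[0] for p in fingertips]
    ys = [p[1] for p in fingertips]
    return (min(xs), min(ys), max(xs), max(ys))

# Example: thumb and index fingertips of both hands forming a frame.
area = viewing_area([(0.2, 0.3), (0.8, 0.3), (0.2, 0.7), (0.8, 0.7)])
```

A real system would obtain the fingertip positions from the headset's hand-tracking runtime; here they are given as plain coordinates to keep the sketch self-contained.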
In one embodiment, the step S103 may include at least one of the following:
recognizing gesture actions as pinch actions of fingers of two hands, and determining the operation of pressing a shutter in a matched mode;
recognizing the gesture as a pinching action of a finger of a single hand, and determining the operation of matching and switching the directions of the cameras; or,
and recognizing gesture actions, namely clicking a trigger button in a view area by a finger to determine the operation of matching and replacing the filter effect.
Determining the corresponding photographing operation by recognizing the gesture action can be implemented through a preset correspondence between gesture actions and photographing operations: once a gesture action is recognized, the matched photographing operation is obtained by looking it up in the correspondence. Photographing operations are thus determined by gesture recognition and triggered directly, which greatly simplifies the VR photographing process and enhances the user experience.
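The preset correspondence described above can be sketched as a simple lookup table; the gesture labels and operation names here are illustrative assumptions, not identifiers from the patent:

```python
# Preset correspondence between recognized gesture actions and
# photographing operations (labels are hypothetical).
GESTURE_OPERATIONS = {
    "two_hand_pinch": "press_shutter",
    "one_hand_pinch": "switch_camera_direction",
    "tap_trigger_button": "change_filter",
}

def match_operation(gesture):
    """Look up the photographing operation matched to a gesture.

    Returns None when the gesture has no preset match."""
    return GESTURE_OPERATIONS.get(gesture)
```

Keeping the mapping in data rather than code means new gestures can be added without changing the recognition logic.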
In one embodiment, the method may further include:
monitoring that the gesture changes after the viewing area is determined;
and recognizing the changed gesture as the movement of the finger position, and determining a new viewing area according to the moved finger position.
Monitoring gesture changes and updating the viewing area from the moved finger positions makes it very convenient for the user to adjust the viewing area: a light finger movement is enough. This is simple, fast, and more intelligent, and enhances the user experience.
In an embodiment, the recognizing the changed gesture motion as the movement of the finger position, and determining a new viewing area according to the moved finger position may include at least one of:
recognizing the changed gesture as the movement of the finger position, and determining a viewing area with the changed shape according to the moved finger position;
recognizing the changed gesture as finger movement for shortening the distance between the two hands, and determining a reduced view finding area according to the position of the moved finger;
recognizing the changed gesture motion as finger movement for expanding the space between the two hands, and determining an expanded view finding area according to the position of the moved finger;
recognizing the changed gesture motion as finger movement for zooming in the distance between the gesture and the user, and determining a view finding area with the reduced object distance according to the position of the moved finger; or,
and recognizing the changed gesture motion as finger movement for zooming out and separating the gesture motion from the user, and determining a view finding area with the increased object distance according to the position of the moved finger.
Each of the above modes lets a finger movement trigger a change of the viewing area, and different gesture actions change the size, shape, and object distance of the viewing area. This realizes super-realistic photographing in the VR environment; it is convenient, fast, more intelligent, and enhances the user experience.
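A minimal sketch of updating the viewing area after a finger movement, covering the size and object-distance adjustments listed above. All names, the state representation, and the depth handling are assumptions for illustration:

```python
def update_viewing_area(old, moved_fingertips, depth_delta):
    """Recompute the viewing area from moved fingertip positions.

    old: dict with keys "rect" and "object_distance".
    moved_fingertips: list of (x, y) positions after the move;
        narrowing the hands shrinks the rect, widening expands it.
    depth_delta: change in the gesture's distance from the user;
        negative (pulled closer) reduces the object distance for a
        close-range shot, positive increases it for a long-range shot.
    """
    xs = [p[0] for p in moved_fingertips]
    ys = [p[1] for p in moved_fingertips]
    new_rect = (min(xs), min(ys), max(xs), max(ys))
    new_distance = max(0.0, old["object_distance"] + depth_delta)
    return {"rect": new_rect, "object_distance": new_distance}

# Example: hands moved inward and pulled toward the user.
state = {"rect": (0.0, 0.0, 1.0, 1.0), "object_distance": 2.0}
state = update_viewing_area(state, [(0.1, 0.1), (0.9, 0.9)], -0.5)
```

The same function covers all five recognized movement modes: shape and size changes fall out of the recomputed rectangle, and object-distance changes fall out of the depth delta.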
According to the method provided by the embodiments of the disclosure, photographing in the virtual environment is based on gesture recognition, which simplifies the photographing process: the user no longer needs to perform a series of operations such as selecting, clicking, and projecting rays with a moved handle, and can complete VR photographing easily through gestures. The method is more convenient, faster, and more intelligent, realizes a super-realistic photographing mode, and enhances the user experience.
Fig. 2 is a schematic diagram of a virtual photographing method based on gesture recognition in an embodiment of the present disclosure. As shown in fig. 2, the method includes:
s201: monitoring the gesture of a user under the condition that the virtual environment starts a gesture photographing mode;
the gesture photographing mode can be enabled in advance through a settings function: when the user wants to experience VR photographing, the photographing mode can be changed to the gesture photographing mode in the settings.
S202: determining a region framed by the finger position connecting lines in the gesture as a viewing region;
illustratively, the user uses two hands to make a gesture, including the thumb and index finger of the left hand and the thumb and index finger of the right hand, so that the positions of the fingertips of the four fingers can be sequentially connected to obtain a framed area, which is then determined as a viewing area.
In the embodiments of the present disclosure, the shape of the viewing area may vary, including but not limited to rectangle, square, triangle, star, heart, and so on. The number of fingers used in the gesture, and whether the left hand, the right hand, or both hands are used, is not limited, and different gestures can trigger different photographing operations. By using different numbers of fingers, the user can obtain viewing areas of different shapes, which is very flexible; users can customize the gesture as needed for a better experience.
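Since the framed shapes above arise from connecting fingertip positions in order, one way to reason about the resulting region, shown here purely as an illustration and not as the patent's implementation, is the shoelace formula, which gives the area of the polygon formed by any number of fingertips:

```python
def polygon_area(points):
    """Area of the polygon obtained by connecting the given
    fingertip positions in order (shoelace formula).

    Works for any vertex count, so triangle, rectangle, star,
    and other framing shapes are all handled uniformly."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap back to the first vertex
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```

For example, four fingertips at the corners of a unit square yield an area of 1.0, while three fingertips yield the area of the framed triangle.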
S203: recognizing the gesture as that the finger clicks a trigger button in the view area to determine the operation of matching and replacing the filter effect;
illustratively, the viewing area may be displayed on an interface in the VR environment, and the trigger button may be placed anywhere on that interface. To facilitate operation, it can be placed near the user's thumb, so that while holding the framing gesture the user can complete the click of the trigger button with a single thumb and thereby change the filter effect.
S204: executing the operation of replacing the filter effect;
in the embodiment of the present disclosure, the steps S203 to S204 may be replaced by the following steps:
and recognizing that the gesture is a pinch action of a single finger, determining the operation of matching and switching the direction of the camera, and executing the operation of switching the direction of the camera.
Switching the camera direction means switching from the non-selfie direction to the selfie direction, or from the selfie direction to the non-selfie direction, i.e., between the front and rear directions of the camera. The user can switch the camera direction through a pinch action of one hand's fingers, which is simple and easy and gives a good user experience.
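The one-hand-pinch switch between the two camera directions can be sketched as a two-state toggle; the state names are illustrative assumptions:

```python
def toggle_camera(direction):
    """Flip between the selfie and non-selfie camera directions,
    as triggered by a one-hand pinch gesture."""
    return "selfie" if direction == "non_selfie" else "non_selfie"
```

Each recognized pinch simply inverts the current state, so repeated pinches alternate between front- and rear-facing capture.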
S205: monitoring that the gesture changes;
in the embodiments of the disclosure, the user's gesture can change at any time during VR photographing, continuously triggering different photographing operations such as changing the viewing area, switching the camera direction, changing the filter effect, and pressing the shutter. All of these are realized through gesture recognition, achieving the effect of super-realistic VR photographing.
S206: recognizing the changed gesture action as a pinching action of fingers of two hands, and determining the operation of pressing a shutter in a matching manner;
s207: the above-described shutter-pressing operation is performed.
In the embodiments of the present disclosure, the recognized gesture change may take many forms, and the above steps S206-S207 are only one implementation. For example, the changed gesture may also be recognized as a movement of the finger positions, which triggers determination of a new viewing area. Thus, steps S206-S207 can also be replaced by the following step:
and recognizing the changed gesture as the movement of the finger position, and determining a new viewing area according to the moved finger position.
Further, the recognizing the changed gesture motion as the movement of the finger position, and determining a new viewing area according to the moved finger position may include at least one of:
recognizing the changed gesture as the movement of the finger position, and determining a viewing area with the changed shape according to the moved finger position;
recognizing the changed gesture as finger movement for shortening the distance between the two hands, and determining a reduced view finding area according to the position of the moved finger;
recognizing the changed gesture motion as finger movement for expanding the space between the two hands, and determining an expanded view finding area according to the position of the moved finger;
recognizing the changed gesture motion as finger movement for zooming in the distance between the gesture and the user, and determining a view finding area with the reduced object distance according to the position of the moved finger; or,
and recognizing the changed gesture motion as finger movement for zooming out and separating the gesture motion from the user, and determining a view finding area with the increased object distance according to the position of the moved finger.
In the embodiments of the present disclosure, the object distance refers to the distance between the user and the photographed object. When the fingers are pulled closer, the object distance decreases, i.e., the distance between the user and the photographed object decreases; when the fingers are moved farther away, the object distance increases, i.e., the distance between the user and the photographed object increases, realizing switching between close-range and long-range shots.
According to the method provided by the embodiments of the disclosure, on the basis of gesture recognition in the virtual environment, monitoring gesture changes lets different gestures trigger different photographing operations, simplifying the photographing process: the user no longer needs to perform a series of operations such as selecting, clicking, and projecting rays with a moved handle, and can complete VR photographing easily through gestures. The method is more convenient, faster, and more intelligent, realizes a super-realistic photographing mode, and enhances the user experience.
FIG. 3 is a block diagram of a virtual photographing apparatus based on gesture recognition according to an embodiment of the present disclosure. As shown in fig. 3, the apparatus includes:
the monitoring module 301 is configured to monitor a gesture of a user when the gesture photographing mode is started in the virtual environment;
a view finding module 302, configured to determine a view finding area according to a position of the gesture;
the recognition module 303 is used for recognizing the gesture to determine the matched photographing operation;
and the photographing module 304 is used for executing the matched photographing operation.
In one embodiment, the viewfinder module 302 may be used to:
and determining the area framed by the connecting line of the finger positions in the gesture as a viewing area.
FIG. 4 is a block diagram of an identification module in an embodiment in accordance with the present disclosure. As shown in fig. 4, in one embodiment, the identification module may include at least one of the following:
a first recognition unit 401, configured to recognize that the gesture is a pinch motion of fingers of both hands, and determine to match an operation of pressing a shutter;
the second recognition unit 402 is used for recognizing that the gesture is a pinch action of a finger of a single hand, and determining the operation of matching and switching the direction of the camera; or,
and a third recognition unit 403, configured to recognize that the gesture is a finger clicking a trigger button in the viewing area, and determine an operation matching the filter replacement effect.
In one embodiment, the listening module 301 may be further configured to: after the view finding module determines a view finding area, monitoring that the gesture changes; the identification module 303 may also be configured to: and recognizing the changed gesture as the movement of the finger position, and determining a new viewing area according to the moved finger position.
In one embodiment, the identification module 303 may be specifically configured to at least one of the following when determining the new viewing area:
recognizing the changed gesture as the movement of the finger position, and determining a viewing area with the changed shape according to the moved finger position;
recognizing the changed gesture as finger movement for shortening the distance between the two hands, and determining a reduced view finding area according to the position of the moved finger;
recognizing the changed gesture motion as finger movement for expanding the space between the two hands, and determining an expanded view finding area according to the position of the moved finger;
recognizing the changed gesture motion as finger movement for zooming in the distance between the gesture and the user, and determining a view finding area with the reduced object distance according to the position of the moved finger;
and recognizing the changed gesture motion as finger movement for zooming out and separating the gesture motion from the user, and determining a view finding area with the increased object distance according to the position of the moved finger.
The apparatus provided in the embodiment of the present disclosure may be configured to execute the method provided in any one of the method embodiments, and specific processes are described in the method embodiments and are not described herein again.
The apparatus provided by the embodiments of the disclosure photographs in the virtual environment based on gesture recognition, simplifying the photographing process: the user no longer needs to perform a series of operations such as selecting, clicking, and projecting rays with a moved handle, and can complete VR photographing easily through gestures. It is more convenient, faster, and more intelligent, realizes a super-realistic photographing mode, and enhances the user experience.
In the technical scheme of the disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the provisions of relevant laws and regulations and do not violate public order and good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the device 500 comprises a computing unit 501, which may perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 501 executes the various methods and processes described above, such as the virtual photographing method based on gesture recognition. For example, in some embodiments, the virtual photographing method based on gesture recognition may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the virtual photographing method based on gesture recognition described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the virtual photographing method based on gesture recognition by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved; the present disclosure is not limited in this respect.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (13)

1. A virtual photographing method based on gesture recognition comprises the following steps:
monitoring the gesture of a user under the condition that the virtual environment starts a gesture photographing mode;
determining a viewing area according to the position of the gesture;
recognizing the gesture to determine a matched photographing operation;
and executing the matched photographing operation.
2. The method of claim 1, wherein the determining a viewing area from the location of the gesture comprises:
determining, as the viewing area, an area framed by lines connecting the finger positions in the gesture.
3. The method of claim 1, wherein the recognizing the gesture to determine a matched photographing operation comprises at least one of:
recognizing the gesture as a pinching action of fingers of both hands, and determining that the matched operation is pressing the shutter;
recognizing the gesture as a pinching action of fingers of a single hand, and determining that the matched operation is switching the camera direction; or
recognizing the gesture as a finger clicking a trigger button in the viewing area, and determining that the matched operation is switching the filter effect.
4. The method of claim 1, further comprising:
monitoring that the gesture changes after the viewing area is determined;
recognizing the changed gesture as a movement of the finger positions, and determining a new viewing area according to the moved finger positions.
5. The method of claim 4, wherein the recognizing the changed gesture as a movement of the finger positions and determining a new viewing area according to the moved finger positions comprises at least one of:
recognizing the changed gesture as a movement of the finger positions, and determining a viewing area with a changed shape according to the moved finger positions;
recognizing the changed gesture as a finger movement that narrows the distance between the two hands, and determining a reduced viewing area according to the moved finger positions;
recognizing the changed gesture as a finger movement that widens the distance between the two hands, and determining an enlarged viewing area according to the moved finger positions;
recognizing the changed gesture as a finger movement that brings the gesture closer to the user, and determining a viewing area with a reduced object distance according to the moved finger positions; or
recognizing the changed gesture as a finger movement that moves the gesture away from the user, and determining a viewing area with an increased object distance according to the moved finger positions.
6. A virtual photographing device based on gesture recognition comprises:
the monitoring module is used for monitoring the gesture of the user under the condition that the gesture photographing mode is started in the virtual environment;
the framing module is used for determining a framing area according to the position of the gesture;
the recognition module is used for recognizing the action of the gesture to determine the matched photographing operation;
and the photographing module is used for executing the matched photographing operation.
7. The apparatus of claim 6, wherein the framing module is configured to:
determine, as the viewing area, an area framed by lines connecting the finger positions in the gesture.
8. The apparatus of claim 6, wherein the recognition module comprises at least one of:
a first recognition unit, configured to recognize the gesture as a pinching action of fingers of both hands and determine that the matched operation is pressing the shutter;
a second recognition unit, configured to recognize the gesture as a pinching action of fingers of a single hand and determine that the matched operation is switching the camera direction; or
a third recognition unit, configured to recognize the gesture as a finger clicking a trigger button in the viewing area and determine that the matched operation is switching the filter effect.
9. The apparatus of claim 6, wherein the monitoring module is further configured to:
monitor that the gesture changes after the framing module determines the viewing area;
and the recognition module is further configured to:
recognize the changed gesture as a movement of the finger positions, and determine a new viewing area according to the moved finger positions.
10. The apparatus of claim 9, wherein the recognition module is configured to perform at least one of:
recognizing the changed gesture as a movement of the finger positions, and determining a viewing area with a changed shape according to the moved finger positions;
recognizing the changed gesture as a finger movement that narrows the distance between the two hands, and determining a reduced viewing area according to the moved finger positions;
recognizing the changed gesture as a finger movement that widens the distance between the two hands, and determining an enlarged viewing area according to the moved finger positions;
recognizing the changed gesture as a finger movement that brings the gesture closer to the user, and determining a viewing area with a reduced object distance according to the moved finger positions; or
recognizing the changed gesture as a finger movement that moves the gesture away from the user, and determining a viewing area with an increased object distance according to the moved finger positions.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-5.
13. A computer program product comprising a computer program/instructions which, when executed by a processor, implement the method of any one of claims 1-5.
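The claimed method steps (determining a viewing area from finger positions, recognizing a gesture, and matching a photographing operation) can be illustrated with a minimal sketch. This is not the patent's implementation; all names (`GestureEvent`, `viewing_area`, `match_operation`) and the bounding-box approximation of the finger-framed area are assumptions made for illustration only.

```python
# Illustrative sketch of the claimed gesture-to-operation flow.
# All type and function names here are hypothetical, not from the patent.
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class GestureEvent:
    kind: str                      # e.g. "pinch_both", "pinch_single", "tap_trigger"
    finger_positions: List[Point] = field(default_factory=list)

def viewing_area(finger_positions: List[Point]) -> Tuple[Point, Point]:
    """Claim 2: the area framed by lines connecting the finger positions,
    approximated here by their axis-aligned bounding box."""
    xs = [p[0] for p in finger_positions]
    ys = [p[1] for p in finger_positions]
    return (min(xs), min(ys)), (max(xs), max(ys))

def match_operation(event: GestureEvent) -> str:
    """Claim 3: map a recognized gesture to a matched photographing operation."""
    table = {
        "pinch_both": "press_shutter",    # two-handed pinch -> press the shutter
        "pinch_single": "switch_camera",  # one-handed pinch -> switch camera direction
        "tap_trigger": "switch_filter",   # tap trigger button -> switch filter effect
    }
    return table.get(event.kind, "none")
```

A production system would replace the lookup table with a trained hand-pose classifier and recompute the viewing area continuously as the monitored finger positions move (claims 4-5).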
CN202111600507.7A 2021-12-24 2021-12-24 Virtual photographing method and device based on gesture recognition, electronic equipment and medium Pending CN114339039A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111600507.7A CN114339039A (en) 2021-12-24 2021-12-24 Virtual photographing method and device based on gesture recognition, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111600507.7A CN114339039A (en) 2021-12-24 2021-12-24 Virtual photographing method and device based on gesture recognition, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN114339039A true CN114339039A (en) 2022-04-12

Family

ID=81012551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111600507.7A Pending CN114339039A (en) 2021-12-24 2021-12-24 Virtual photographing method and device based on gesture recognition, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN114339039A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103259978A (en) * 2013-05-20 2013-08-21 邱笑难 Method for photographing by utilizing gesture
CN104020843A (en) * 2013-03-01 2014-09-03 联想(北京)有限公司 Information processing method and electronic device
CN106341603A (en) * 2016-09-29 2017-01-18 网易(杭州)网络有限公司 View finding method for virtual reality environment, device and virtual reality device
CN106845335A (en) * 2016-11-29 2017-06-13 歌尔科技有限公司 Gesture identification method, device and virtual reality device for virtual reality device
CN107479712A (en) * 2017-08-18 2017-12-15 北京小米移动软件有限公司 information processing method and device based on head-mounted display apparatus
CN109032358A (en) * 2018-08-27 2018-12-18 百度在线网络技术(北京)有限公司 The control method and device of AR interaction dummy model based on gesture identification
CN113325954A (en) * 2021-05-27 2021-08-31 百度在线网络技术(北京)有限公司 Method, apparatus, device, medium and product for processing virtual objects

Similar Documents

Publication Publication Date Title
JP6986187B2 (en) Person identification methods, devices, electronic devices, storage media, and programs
US20200202121A1 (en) Mode-changeable augmented reality interface
KR20220053670A (en) Target-object matching method and apparatus, electronic device and storage medium
CN109902738B (en) Network module, distribution method and device, electronic equipment and storage medium
KR20220009965A (en) Network training method and apparatus, target detection method and apparatus, and electronic device
CN111967297B (en) Image semantic segmentation method and device, electronic equipment and medium
CN111401230B (en) Gesture estimation method and device, electronic equipment and storage medium
CN111968631B (en) Interaction method, device, equipment and storage medium of intelligent equipment
CN113359995B (en) Man-machine interaction method, device, equipment and storage medium
CN113325954B (en) Method, apparatus, device and medium for processing virtual object
CN110796094A (en) Control method and device based on image recognition, electronic equipment and storage medium
CN112597944B (en) Key point detection method and device, electronic equipment and storage medium
WO2022111458A1 (en) Image capture method and apparatus, electronic device, and storage medium
CN111784757A (en) Training method of depth estimation model, depth estimation method, device and equipment
CN115511779B (en) Image detection method, device, electronic equipment and storage medium
CN114743196A (en) Neural network for text recognition, training method thereof and text recognition method
JP7242812B2 (en) Image recognition method, device and electronic device
CN110032418A (en) Screenshot method, screenshot system, terminal equipment and computer-readable storage medium
CN112613447B (en) Key point detection method and device, electronic equipment and storage medium
CN110892371B (en) Display control method and terminal
WO2023246296A1 (en) Automatic zooming method and apparatus, and self-timer and storage medium
CN110941987B (en) Target object identification method and device, electronic equipment and storage medium
CN112837372A (en) Data generation method and device, electronic equipment and storage medium
CN114339039A (en) Virtual photographing method and device based on gesture recognition, electronic equipment and medium
CN113327311B (en) Virtual character-based display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination