WO2020001015A1 - Scene controlling method, device and electronic equipment - Google Patents
Scene controlling method, device and electronic equipment
- Publication number
- WO2020001015A1 (PCT/CN2019/073076)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- scene
- action
- trigger object
- area
- present disclosure
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72454—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
Definitions
- the present disclosure relates to the field of image processing, and in particular, to a method, device, and electronic device for scene manipulation.
- as a natural and intuitive way of communication, gestures are an important part of human-computer interaction.
- with the development of science and technology and the growing popularity of computer vision, users have ever higher requirements for the naturalness of human-computer interaction.
- traditional mouse-and-keyboard interaction methods have shown their limitations, and new human-computer interaction methods have become a research hotspot.
- gestures are an efficient means of human-computer interaction and device control. Vision-based gesture recognition is a challenging research topic in the fields of human-computer interaction and pattern recognition.
- 3D scanning equipment based on depth cameras is large, has a high hardware cost, requires substantial computing power, and is difficult to integrate into popular smart terminals.
- in the prior art, controlling a scene relies on specific complex devices.
- such complex devices are often not installed in smart phones.
- in addition, scene control in the prior art typically involves moving the scene perspective and interacting with objects in the scene, but cannot change the form of the scene or the form of objects in the scene.
- embodiments of the present disclosure provide a method, an apparatus, and an electronic device for scene manipulation, which at least partially solve the problems in the prior art.
- an embodiment of the present disclosure provides a scene manipulation method, including:
- identifying the trigger object includes: acquiring feature information of the trigger object; comparing the feature information with standard feature information; and identifying whether the object is the trigger object according to the comparison result.
- acquiring the feature information of the trigger object specifically includes: acquiring key points on the trigger object.
- determining the action of the trigger object includes: acquiring first feature information of the trigger object; acquiring second feature information of the trigger object; and determining the action of the trigger object based on the first feature information and the second feature information.
- determining the action of the trigger object includes: acquiring an area of the trigger object; and determining the action of the trigger object based on the area.
- determining the action of the trigger object includes: acquiring a first area of the trigger object; acquiring a second area of the trigger object; and determining the action of the trigger object based on a comparison result between the first area and the second area.
- obtaining the area of the trigger object includes: setting a minimum regular box such that the trigger object is completely contained within it, and calculating the area of the minimum regular box to obtain the area of the trigger object.
- the trigger object is a human hand.
- the actions include opening and closing, rotating, moving closer or farther, or changing gestures.
- an embodiment of the present disclosure further provides a scene manipulation device, including:
- a display module, configured to display the first form of the scene;
- an identification module, configured to identify the trigger object;
- a judgment module, configured to determine the action of the trigger object;
- a control module, configured to switch the first form of the scene to the second form of the scene based on the action, the first form of the scene being associated with the second form of the scene.
- the identification module includes:
- a feature information acquisition module, configured to acquire feature information of the trigger object;
- a comparison module, configured to compare the feature information with standard feature information;
- a trigger object judgment module, configured to identify whether the object is the trigger object according to the comparison result.
- acquiring the feature information of the trigger object specifically includes: acquiring key points on the trigger object.
- the judgment module includes:
- a first feature information acquisition module, configured to acquire first feature information of the trigger object;
- a second feature information acquisition module, configured to acquire second feature information of the trigger object;
- a first action judgment module, configured to determine the action of the trigger object based on the first feature information and the second feature information.
- the judgment module includes:
- an area acquisition module, configured to acquire the area of the trigger object.
- the judgment module includes:
- a first area acquisition module, configured to acquire a first area of the trigger object;
- a second area acquisition module, configured to acquire a second area of the trigger object;
- a second action judgment module, configured to determine the action of the trigger object based on a comparison result between the first area and the second area.
- the area acquisition module includes:
- a regular box setting module, configured to set a minimum regular box such that the trigger object is completely contained within it;
- an area calculation module, configured to calculate the area of the minimum regular box to obtain the area of the trigger object.
- the trigger object is a human hand.
- the actions include opening and closing, rotating, moving closer or farther, or changing gestures.
- an embodiment of the present disclosure further provides an electronic device including: at least one processor; and,
- a memory connected in communication with the at least one processor; wherein,
- the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform any of the scene manipulation methods described in the first aspect.
- an embodiment of the present disclosure further provides a non-transitory computer-readable storage medium, where the non-transitory computer-readable storage medium stores computer instructions, and the computer instructions are used to cause a computer to execute any of the scene manipulation methods described in the first aspect.
- the embodiments of the present disclosure provide a method, device, electronic device, and non-transitory computer-readable storage medium for scene manipulation. The method calls corresponding preset scene information according to the action of a trigger object, thereby changing the scene or controlling an element in the scene and associating the action of the trigger object with the scene. A scene in an electronic device can thus be controlled without upgrading the existing equipment, reducing cost.
- FIG. 1 is a flowchart of a scene manipulation method according to an embodiment of the present disclosure
- FIG. 2 is a flowchart of identifying a triggering object according to an embodiment of the present disclosure
- FIG. 3 is a flowchart of determining an action of a triggering object according to an embodiment of the present disclosure
- FIG. 4 is a schematic diagram of determining the action of a trigger object based on area according to an embodiment of the present disclosure;
- FIG. 5 is a schematic diagram of setting a minimum regular box according to an embodiment of the present disclosure.
- FIG. 6 is a schematic block diagram of a scene manipulation apparatus according to an embodiment of the present disclosure.
- FIG. 7 is a schematic block diagram of an electronic device according to an embodiment of the present disclosure.
- FIG. 8 is a schematic diagram of a computer-readable storage medium according to an embodiment of the present disclosure.
- FIG. 9 is a schematic block diagram of a terminal according to an embodiment of the present disclosure.
- an embodiment of the present disclosure provides a method for scene manipulation.
- the scene manipulation method includes the following steps:
- S101: Display the first form of the scene.
- the first form of the scene is the form before the scene is switched. In a specific application scenario, the first form of the scene may be a 2D scene displayed by a mobile phone, or it may be a 3D scene. While the first form of the scene is displayed, the trigger object is recognized and the action of the trigger object is determined.
- S102: Identify the trigger object.
- S103: Determine the action of the trigger object.
- after the trigger object is identified in step S102, it is necessary to determine whether the trigger object has performed a corresponding action: the action of the trigger object is compared with the saved action data to determine which action in the action data it matches.
- S104: Switch the first form of the scene to the second form of the scene based on the action, where the first form of the scene is associated with the second form of the scene.
- after the corresponding action is determined in step S103, the scene information corresponding to the action is called and displayed. The called scene information may be combined with the existing scene information to form new scene information, used to replace an element in the existing scene information, or used to replace the existing scene information directly.
- a scene is a picture composed of various elements. Manipulating the scene may mean switching between different scenes, or manipulating an element within the same scene, such as controlling the near-far motion of an object (for example, a ball) in the scene. A minimal sketch of this flow is given below.
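- as a minimal sketch of the S101-S104 flow, assuming a preset association table between recognized actions and scene forms; the names `Scene`, `FORM_ASSOCIATIONS`, and `switch_form` are hypothetical and not taken from the disclosure:

```python
# A minimal, hypothetical sketch of the S101-S104 flow; Scene,
# FORM_ASSOCIATIONS and switch_form are illustrative names only.
from dataclasses import dataclass, field

@dataclass
class Scene:
    name: str                                  # e.g. "2d_rain" or "3d_rain"
    elements: dict = field(default_factory=dict)

# Preset association between a recognized action and the second form (S104).
FORM_ASSOCIATIONS = {"hand_open": Scene("3d_rain")}

def switch_form(current: Scene, action: str) -> Scene:
    """Return the second form associated with the action, or keep the first form."""
    return FORM_ASSOCIATIONS.get(action, current)

scene = Scene("2d_rain")                       # S101: first form on display
scene = switch_form(scene, "hand_open")        # S102/S103 yielded "hand_open"
print(scene.name)                              # -> 3d_rain
```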
- the trigger object is a hand
- the actions of the trigger object are mainly various gestures, such as opening and closing the hand, rotation, moving closer or farther, or gesture changes.
- the scene change in the mobile phone is controlled through gestures.
- the mobile phone displays the real-time picture taken by the camera and is equipped with background music.
- the real-time picture with background music is the first form of the scene, such as a scene with 2D rain maps.
- the human hand is first identified, and then it is determined whether the hand has made an opening movement, that is, changed from a fist to an open palm. If it is determined that the hand has made an opening action, the corresponding scene information is called, such as 3D-particle rain information, and this rain information is combined with the currently displayed real-time camera picture, so that a scene of 3D raindrop particles is shown on the mobile phone; the background music can be kept according to the original settings.
- the scene in which 3D particles of raindrops are displayed on the mobile phone is the second form of the scene.
- the association between the second form of the scene and the first form of the scene is preset.
- the function of the action is to call the corresponding information to transform the scene from the first form into the second form.
- the mobile phone displays a blue sky and white clouds.
- the mobile phone detects that the image captured by the camera contains a human hand; after the phone recognizes the hand, it next needs to determine whether the hand moves forward or backward relative to the phone's camera. If it is determined that the hand has moved toward or away from the camera, the corresponding scene information is called to make the cloud move with the relative movement of the hand: when the hand approaches the camera, the cloud approaches the screen, and when the hand moves away from the camera, the cloud recedes from the screen.
- the scene before the white cloud moves is the first form of the scene
- the scene after the white cloud moves is the second form of the scene.
- the mobile phone is only used as an example for easy understanding.
- the disclosure is not limited to mobile phones, and electronic devices with information processing functions, such as tablets and portable computers, can use the technical solutions of the disclosure.
- the use of the hand as a trigger object is merely an exemplary description, and the trigger object is not limited to the hand.
- the trigger object may also be a head or a limb, and the action may be a movement such as shaking a head or blinking.
- identifying the trigger object in step S102 specifically includes: acquiring feature information of the trigger object; comparing the feature information with standard feature information; and identifying whether the object is the trigger object according to the comparison result.
- the feature information of an object is information that can characterize the object, such as the contour information and key point information of the object image.
- the techniques for extracting the contour information and key point information of an image are relatively mature in the prior art and are not repeated here.
- the obtained feature information needs to be compared with pre-stored feature information.
- taking a human hand as an example, the contour or key point information of the image is acquired and then compared with the pre-stored contour or key point information of a human hand. If the comparison results match, the acquired image is considered to be a human hand, that is, the trigger object is identified.
- when multiple sets of hand information are pre-stored, the contour or key point information of the acquired image is compared with each pre-stored set one by one; as long as one comparison result matches, the acquired image is identified as a human hand image. One possible contour comparison is sketched below.
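- a sketch of such contour-based identification, assuming OpenCV is available; the Otsu thresholding and the `max_distance` value are illustrative assumptions, not taken from the disclosure. With several pre-stored templates, the same call would simply be repeated per template until one matches, mirroring the one-by-one comparison above:

```python
# Sketch of contour-based trigger-object identification (assumes OpenCV).
import cv2

def is_trigger_object(frame_gray, template_contour, max_distance=0.2):
    """Compare the largest contour in a grayscale frame against a
    pre-stored template contour of the trigger object (e.g. a hand)."""
    _, binary = cv2.threshold(frame_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    candidate = max(contours, key=cv2.contourArea)
    # matchShapes returns 0.0 for identical shapes; small values mean a match.
    distance = cv2.matchShapes(candidate, template_contour,
                               cv2.CONTOURS_MATCH_I1, 0.0)
    return distance < max_distance
```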
- acquiring the feature information of the trigger object specifically includes acquiring key points on the trigger object.
- the step S103 of determining the action of the triggering object specifically includes:
- Step S301: Acquire first feature information of the trigger object.
- Step S302: Acquire second feature information of the trigger object.
- Step S303: Determine the action of the trigger object based on the first feature information and the second feature information.
- this exemplary description uses key points as the feature information.
- taking a human hand as an example, to judge a movement of the hand from a fist to an open palm, the key points are first acquired while the hand is in a fist, and then acquired again after the hand opens. The fist key points are compared with pre-stored hand key points to determine that the hand is in a fist state, and the open-hand key points are compared with pre-stored hand key points to determine that the hand is in an open state; from these two states it is determined that the hand has made an opening action. A sketch of such a key-point-based judgment follows.
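- a hypothetical key-point heuristic for the fist-to-open judgment; the key-point layout (row 0 = wrist, last five rows = fingertips) and the `open_ratio` threshold are assumptions for illustration only, not the disclosed method:

```python
# Hypothetical heuristic: classify fist vs. open hand from 2D key points.
import numpy as np

def hand_state(keypoints: np.ndarray, open_ratio: float = 1.6) -> str:
    """Classify a hand as 'fist' or 'open' from (N, 2) image key points."""
    wrist = keypoints[0]
    fingertips = keypoints[-5:]
    knuckles = keypoints[1:-5]
    spread = np.linalg.norm(fingertips - wrist, axis=1).mean()
    palm_size = np.linalg.norm(knuckles - wrist, axis=1).mean()
    # Open hands spread the fingertips far beyond the knuckle radius.
    return "open" if spread > open_ratio * palm_size else "fist"

def made_open_action(first_kp: np.ndarray, second_kp: np.ndarray) -> bool:
    """The opening action: fist in the first frame, open in the second."""
    return hand_state(first_kp) == "fist" and hand_state(second_kp) == "open"
```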
- determining the action of the trigger object in step S103 may alternatively include: acquiring an area of the trigger object, and determining the action of the trigger object based on the area.
- S401: Acquire a first area of the trigger object.
- S402: Acquire a second area of the trigger object.
- S403: Determine the action of the trigger object based on a comparison result between the first area and the second area.
- the farther the human hand is from the mobile phone, the smaller its imaged area; the closer the hand is to the phone, the larger its imaged area. Therefore, by calculating the area of the human hand, its movement relative to the phone can easily be determined, as in the sketch below.
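- a minimal sketch of the S401-S403 area comparison; the jitter `tolerance` is an illustrative value, not taken from the disclosure:

```python
# Sketch of judging near/far movement from two imaged hand areas.
def near_far_action(first_area: float, second_area: float,
                    tolerance: float = 0.15) -> str:
    """A growing area means the hand moved toward the camera,
    a shrinking area means it moved away; tolerance filters jitter."""
    change = (second_area - first_area) / first_area
    if change > tolerance:
        return "closer"
    if change < -tolerance:
        return "farther"
    return "still"
```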
- obtaining the area of the trigger object specifically includes the following steps: setting a minimum regular box such that the trigger object is completely contained within it, and calculating the area of the minimum regular box to obtain the area of the trigger object.
- in one embodiment, the regular box is a rectangle that just wraps the hand; the area of the hand is approximated by the area of the rectangle, which simplifies the computation.
- the movement of scene elements can also be associated with the side length of the rectangle.
- for example, the trajectory of the cloud can be controlled according to the side length of the rectangle. Since the side length changes continuously rather than in jumps, the movement of the cloud is relatively smooth, with no jumping motion; a sketch of this mapping follows.
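- a sketch of the minimum regular (bounding) box and the side-length mapping, assuming OpenCV; the `near_px`/`far_px` calibration values are illustrative assumptions:

```python
# Sketch: bounding rectangle of the hand, and a smooth side-length mapping.
import cv2
import numpy as np

def hand_box_metrics(hand_contour):
    """Upright bounding rectangle that just wraps the hand: its area
    stands in for the hand's area, its side length drives scene elements."""
    x, y, w, h = cv2.boundingRect(hand_contour)
    return w * h, max(w, h)

def cloud_depth_from_side(side_px: float, near_px: float = 300.0,
                          far_px: float = 60.0) -> float:
    """Map the side length linearly to a 0..1 cloud depth, so the cloud
    moves smoothly with the hand, without jumps."""
    t = (side_px - far_px) / (near_px - far_px)
    return float(np.clip(t, 0.0, 1.0))
```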
- an embodiment of the present disclosure provides a scene manipulation device, including:
- a display module 600, configured to display the first form of the scene;
- an identification module 601, configured to identify the trigger object;
- a judgment module 602, configured to determine the action of the trigger object;
- a control module 603, configured to switch the first form of the scene to the second form of the scene based on the action, the first form of the scene being associated with the second form of the scene.
- the identification module 601 includes:
- a feature information acquisition module 6011, configured to acquire feature information of the trigger object;
- a comparison module 6012, configured to compare the feature information with standard feature information;
- a trigger object judgment module 6013, configured to identify whether the object is the trigger object according to the comparison result.
- acquiring the feature information of the trigger object specifically includes: acquiring key points on the trigger object.
- the determining module 602 includes:
- a first feature information acquisition module 6021, configured to acquire first feature information of the trigger object;
- a second feature information acquisition module 6022, configured to acquire second feature information of the trigger object;
- a first action judgment module 6023, configured to determine the action of the trigger object based on the first feature information and the second feature information.
- the determining module 602 includes:
- an area acquisition module, configured to acquire the area of the trigger object.
- the determining module 602 includes:
- a first area acquisition module, configured to acquire a first area of the trigger object;
- a second area acquisition module, configured to acquire a second area of the trigger object;
- a second action judgment module, configured to determine the action of the trigger object based on a comparison result between the first area and the second area.
- the area acquisition module includes:
- a regular box setting module, configured to set a minimum regular box such that the trigger object is completely contained within it;
- an area calculation module, configured to calculate the area of the minimum regular box to obtain the area of the trigger object.
- the trigger object is a human hand.
- the actions include opening and closing, rotating, moving closer or farther, or changing gestures.
- FIG. 7 is a hardware block diagram of an electronic device according to an embodiment of the present disclosure. As shown in FIG. 7, the electronic device 70 according to an embodiment of the present disclosure includes a memory 71 and a processor 72.
- the memory 71 is configured to store non-transitory computer-readable instructions.
- the memory 71 may include one or more computer program products, and the computer program product may include various forms of computer-readable storage media, such as volatile memory and / or non-volatile memory.
- the volatile memory may include, for example, a random access memory (RAM) and / or a cache memory.
- the non-volatile memory may include, for example, a read-only memory (ROM), a hard disk, a flash memory, and the like.
- the processor 72 may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and / or instruction execution capabilities, and may control other components in the electronic device 70 to perform desired functions.
- the processor 72 is configured to run the computer-readable instructions stored in the memory 71, so that the electronic device 70 performs all or part of the steps of scene manipulation of the foregoing embodiments of the present disclosure.
- this embodiment may also include well-known structures such as a communication bus and interfaces; these well-known structures should also fall within the protection scope of the present disclosure.
- FIG. 8 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure.
- a computer-readable storage medium 80 according to an embodiment of the present disclosure has non-transitory computer-readable instructions 81 stored thereon.
- when the non-transitory computer-readable instructions 81 are executed by a processor, all or part of the steps of the scene manipulation of the foregoing embodiments of the present disclosure are performed.
- the computer-readable storage medium 80 includes, but is not limited to, optical storage media (for example, CD-ROM and DVD), magneto-optical storage media (for example, MO), magnetic storage media (for example, magnetic tape or removable hard disk), rewritable non-volatile memory media (for example, memory cards), and media with built-in ROM (for example, ROM cartridges).
- FIG. 9 is a schematic diagram illustrating a hardware structure of a terminal device according to an embodiment of the present disclosure. As shown in FIG. 9, the terminal 90 includes the foregoing embodiment of a scene manipulation device.
- the terminal device may be implemented in various forms. The terminal device in the present disclosure may include, but is not limited to, mobile terminal devices such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation devices, vehicle-mounted terminal devices, vehicle-mounted display terminals, and vehicle-mounted electronic rear-view mirrors, as well as fixed terminal devices such as digital TVs and desktop computers.
- the terminal 90 may further include other components.
- the terminal 90 may include a power supply unit 91, a wireless communication unit 92, an A/V (audio/video) input unit 93, a user input unit 94, a sensing unit 95, an interface unit 96, a controller 97, an output unit 98, a storage unit 99, and so on.
- FIG. 9 shows a terminal with various components, but it should be understood that it is not required to implement all the illustrated components, and more or fewer components may be implemented instead.
- the wireless communication unit 92 allows radio communication between the terminal 90 and a wireless communication system or network.
- the A / V input unit 93 is used to receive audio or video signals.
- the user input unit 94 may generate key input data according to a command input by the user to control various operations of the terminal device.
- the sensing unit 95 detects the current state of the terminal 90, its position, the presence or absence of the user's touch input, its orientation, and its acceleration or deceleration movement and direction, and generates commands or signals for controlling the operation of the terminal 90.
- the interface unit 96 functions as an interface through which at least one external device can be connected to the terminal 90.
- the output unit 98 is configured to provide an output signal in a visual, audio, and / or tactile manner.
- the storage unit 99 may store software programs and the like for processing and control operations performed by the controller 97, or may temporarily store data that has been output or is to be output.
- the storage unit 99 may include at least one type of storage medium.
- the terminal 90 may cooperate with a network storage device that performs a storage function of the storage unit 99 through a network connection.
- the controller 97 generally controls the overall operation of the terminal device.
- the controller 97 may include a multimedia module for reproducing or playing back multimedia data.
- the controller 97 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as characters or images.
- the power supply unit 91 receives external power or internal power under the control of the controller 97 and provides appropriate power required to operate each element and component.
- scene manipulation proposed by the present disclosure may be implemented using computer-readable media, such as computer software, hardware, or any combination thereof.
- various embodiments of the scene manipulation proposed by the present disclosure can be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 97.
- the various embodiments of scene manipulation proposed by the present disclosure may be implemented with a separate software module allowing at least one function or operation to be performed.
- the software codes may be implemented by a software application (or program) written in any suitable programming language, and the software codes may be stored in the storage unit 99 and executed by the controller 97.
- relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply that any such relationship or order exists between these entities or operations.
- the block diagrams of the devices, apparatuses, equipment, and systems involved in this disclosure are only illustrative examples and are not intended to require or imply that they must be connected, arranged, and configured in the manner shown. As those skilled in the art will realize, these devices, apparatuses, equipment, and systems may be connected, arranged, and configured in any manner. Words such as "comprising," "including," "having," and the like are open-ended words meaning "including but not limited to," and can be used interchangeably with that phrase.
- the words "or" and "and" as used here refer to "and/or" and are used interchangeably with it, unless the context clearly indicates otherwise.
- the term "such as" refers to the phrase "such as, but not limited to," and is used interchangeably with it.
- an "or" used in an enumeration of items beginning with "at least one" indicates a disjunctive enumeration, such that, for example, "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (that is, A and B and C).
- the word "exemplary" does not mean that the described example is preferred or better than other examples.
- each component or each step can be decomposed and/or recombined.
- such decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.
Claims (12)
- 1. A scene manipulation method, comprising: displaying a first form of a scene; identifying a trigger object; determining an action of the trigger object; and switching the first form of the scene to a second form of the scene based on the action, the first form of the scene being associated with the second form of the scene.
- 2. The scene manipulation method according to claim 1, wherein identifying the trigger object comprises: acquiring feature information of the trigger object; comparing the feature information with standard feature information; and identifying whether the object is the trigger object according to the comparison result.
- 3. The scene manipulation method according to claim 2, wherein acquiring the feature information of the trigger object specifically comprises: acquiring key points on the trigger object.
- 4. The scene manipulation method according to claim 1, wherein determining the action of the trigger object comprises: acquiring first feature information of the trigger object; acquiring second feature information of the trigger object; and determining the action of the trigger object based on the first feature information and the second feature information.
- 5. The scene manipulation method according to claim 4, wherein determining the action of the trigger object comprises: acquiring an area of the trigger object; and determining the action of the trigger object based on the area.
- 6. The scene manipulation method according to claim 5, wherein determining the action of the trigger object comprises: acquiring a first area of the trigger object; acquiring a second area of the trigger object; and determining the action of the trigger object based on a comparison result between the first area and the second area.
- 7. The scene manipulation method according to claim 5, wherein acquiring the area of the trigger object comprises: setting a minimum regular box such that the trigger object is completely contained within it; and calculating the area of the minimum regular box to obtain the area of the trigger object.
- 8. The scene manipulation method according to claim 1, wherein the trigger object is a human hand.
- 9. The scene manipulation method according to claim 8, wherein the action includes opening and closing, rotating, moving closer or farther, or gesture changes.
- 10. A scene manipulation device, comprising: a display module, configured to display a first form of a scene; an identification module, configured to identify a trigger object; a judgment module, configured to determine an action of the trigger object; and a control module, configured to switch the first form of the scene to a second form of the scene based on the action, the first form of the scene being associated with the second form of the scene.
- 11. An electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor so that the at least one processor can perform the scene manipulation method according to any one of claims 1-9.
- 12. A non-transitory computer-readable storage medium storing computer instructions, the computer instructions being used to cause a computer to execute the scene manipulation method according to any one of claims 1-9.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2100223.3A GB2590207B (en) | 2018-06-29 | 2019-01-25 | Scene controlling method, device and electronic equipment |
JP2020571800A JP7372945B2 (ja) | 2018-06-29 | 2019-01-25 | シナリオ制御方法、装置および電子装置 |
US16/769,368 US11755119B2 (en) | 2018-06-29 | 2019-01-25 | Scene controlling method, device and electronic equipment |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810699063.9A CN108989553A (zh) | 2018-06-29 | 2018-06-29 | 场景操控的方法、装置及电子设备 |
CN201810699063.9 | 2018-06-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020001015A1 true WO2020001015A1 (zh) | 2020-01-02 |
Family
ID=64539579
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/073076 WO2020001015A1 (zh) | 2018-06-29 | 2019-01-25 | 场景操控的方法、装置及电子设备 |
Country Status (5)
Country | Link |
---|---|
US (1) | US11755119B2 (zh) |
JP (1) | JP7372945B2 (zh) |
CN (1) | CN108989553A (zh) |
GB (1) | GB2590207B (zh) |
WO (1) | WO2020001015A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108989553A (zh) * | 2018-06-29 | 2018-12-11 | 北京微播视界科技有限公司 | 场景操控的方法、装置及电子设备 |
CN112445324A (zh) * | 2019-08-30 | 2021-03-05 | 北京小米移动软件有限公司 | 人机交互方法及装置 |
CN111931762B (zh) * | 2020-09-25 | 2021-07-30 | 广州佰锐网络科技有限公司 | 基于ai的图像识别解决方法、装置及可读存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103019378A (zh) * | 2012-12-07 | 2013-04-03 | 无锡清华信息科学与技术国家实验室物联网技术中心 | 一种移动电子设备手势控制交互方法、装置及移动终端 |
US20130271360A1 (en) * | 2012-04-16 | 2013-10-17 | Qualcomm Incorporated | Interacting with a device using gestures |
CN103383598A (zh) * | 2012-05-04 | 2013-11-06 | 三星电子株式会社 | 终端和基于空间交互控制所述终端的方法 |
CN205304923U (zh) * | 2015-12-23 | 2016-06-08 | 武汉哒呤科技有限公司 | 一种通过手势操作实现交互的手机 |
CN108989553A (zh) * | 2018-06-29 | 2018-12-11 | 北京微播视界科技有限公司 | 场景操控的方法、装置及电子设备 |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8872899B2 (en) | 2004-07-30 | 2014-10-28 | Extreme Reality Ltd. | Method circuit and system for human to machine interfacing by hand gestures |
KR101364571B1 (ko) | 2010-10-06 | 2014-02-26 | 한국전자통신연구원 | 영상 기반의 손 검출 장치 및 그 방법 |
CN102226880A (zh) * | 2011-06-03 | 2011-10-26 | 北京新岸线网络技术有限公司 | 一种基于虚拟现实的体感操作方法及系统 |
JP5701714B2 (ja) | 2011-08-05 | 2015-04-15 | 株式会社東芝 | ジェスチャ認識装置、ジェスチャ認識方法およびジェスチャ認識プログラム |
US9734393B2 (en) * | 2012-03-20 | 2017-08-15 | Facebook, Inc. | Gesture-based control system |
US9477303B2 (en) * | 2012-04-09 | 2016-10-25 | Intel Corporation | System and method for combining three-dimensional tracking with a three-dimensional display for a user interface |
JP6207240B2 (ja) | 2013-06-05 | 2017-10-04 | キヤノン株式会社 | 情報処理装置及びその制御方法 |
CN103530613B (zh) * | 2013-10-15 | 2017-02-01 | 易视腾科技股份有限公司 | 一种基于单目视频序列的目标人手势交互方法 |
US10156908B2 (en) * | 2015-04-15 | 2018-12-18 | Sony Interactive Entertainment Inc. | Pinch and hold gesture navigation on a head-mounted display |
JP6398870B2 (ja) | 2015-05-25 | 2018-10-03 | コニカミノルタ株式会社 | ウェアラブル電子機器およびウェアラブル電子機器のジェスチャー検知方法 |
US10643390B2 (en) | 2016-03-30 | 2020-05-05 | Seiko Epson Corporation | Head mounted display, method for controlling head mounted display, and computer program |
JP2018084886A (ja) | 2016-11-22 | 2018-05-31 | セイコーエプソン株式会社 | 頭部装着型表示装置、頭部装着型表示装置の制御方法、コンピュータープログラム |
CN109313499A (zh) | 2016-06-07 | 2019-02-05 | 皇家飞利浦有限公司 | 用于向用户呈现触觉反馈的设备和用于操作该设备的方法 |
CN107589846A (zh) * | 2017-09-20 | 2018-01-16 | 歌尔科技有限公司 | 场景切换方法、装置及电子设备 |
- 2018
- 2018-06-29 CN CN201810699063.9A patent/CN108989553A/zh active Pending
- 2019
- 2019-01-25 WO PCT/CN2019/073076 patent/WO2020001015A1/zh active Application Filing
- 2019-01-25 JP JP2020571800A patent/JP7372945B2/ja active Active
- 2019-01-25 GB GB2100223.3A patent/GB2590207B/en active Active
- 2019-01-25 US US16/769,368 patent/US11755119B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130271360A1 (en) * | 2012-04-16 | 2013-10-17 | Qualcomm Incorporated | Interacting with a device using gestures |
CN103383598A (zh) * | 2012-05-04 | 2013-11-06 | 三星电子株式会社 | 终端和基于空间交互控制所述终端的方法 |
CN103019378A (zh) * | 2012-12-07 | 2013-04-03 | 无锡清华信息科学与技术国家实验室物联网技术中心 | 一种移动电子设备手势控制交互方法、装置及移动终端 |
CN205304923U (zh) * | 2015-12-23 | 2016-06-08 | 武汉哒呤科技有限公司 | 一种通过手势操作实现交互的手机 |
CN108989553A (zh) * | 2018-06-29 | 2018-12-11 | 北京微播视界科技有限公司 | 场景操控的方法、装置及电子设备 |
Also Published As
Publication number | Publication date |
---|---|
CN108989553A (zh) | 2018-12-11 |
JP7372945B2 (ja) | 2023-11-01 |
JP2021530032A (ja) | 2021-11-04 |
GB202100223D0 (en) | 2021-02-24 |
GB2590207B (en) | 2023-02-08 |
US11755119B2 (en) | 2023-09-12 |
GB2590207A (en) | 2021-06-23 |
US20200311398A1 (en) | 2020-10-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11017580B2 (en) | Face image processing based on key point detection | |
WO2020259651A1 (zh) | 一种控制用户界面的方法及电子设备 | |
WO2020019663A1 (zh) | 基于人脸的特效生成方法、装置和电子设备 | |
US10021319B2 (en) | Electronic device and method for controlling image display | |
US11749020B2 (en) | Method and apparatus for multi-face tracking of a face effect, and electronic device | |
US11513608B2 (en) | Apparatus, method and recording medium for controlling user interface using input image | |
WO2020001014A1 (zh) | 图像美化方法、装置及电子设备 | |
US11366582B2 (en) | Screenshot capturing method, device, electronic device and computer-readable medium | |
WO2020019664A1 (zh) | 基于人脸的形变图像生成方法和装置 | |
US20130234957A1 (en) | Information processing apparatus and information processing method | |
WO2020001015A1 (zh) | 场景操控的方法、装置及电子设备 | |
WO2020037923A1 (zh) | 图像合成方法和装置 | |
WO2020019665A1 (zh) | 基于人脸的三维特效生成方法、装置和电子设备 | |
WO2017113821A1 (zh) | 一种智能手机操作方法、装置及智能手机 | |
CN110275611B (zh) | 一种参数调节方法、装置和电子设备 | |
US20150063785A1 (en) | Method of overlappingly displaying visual object on video, storage medium, and electronic device | |
WO2020052083A1 (zh) | 侵权图片的识别方法、装置和计算机可读存储介质 | |
JP2024518333A (ja) | マルチスクリーンインタラクション方法及び機器、端末装置、及び車両 | |
WO2022111458A1 (zh) | 图像拍摄方法和装置、电子设备及存储介质 | |
EP2939411A1 (en) | Image capture | |
WO2020037924A1 (zh) | 动画生成方法和装置 | |
WO2020029556A1 (zh) | 自适应平面的方法、装置和计算机可读存储介质 | |
WO2020000975A1 (zh) | 视频拍摄方法、客户端、终端及介质 | |
WO2020029555A1 (zh) | 用于平面间无缝切换的方法、装置和计算机可读存储介质 | |
CN110827413A (zh) | 控制虚拟物体形态改变的方法、装置和计算机可读存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19825265 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2020571800 Country of ref document: JP Kind code of ref document: A |
NENP | Non-entry into the national phase |
Ref country code: DE |
ENP | Entry into the national phase |
Ref document number: 202100223 Country of ref document: GB Kind code of ref document: A Free format text: PCT FILING DATE = 20190125 |
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.04.2021) |
122 | Ep: pct application non-entry in european phase |
Ref document number: 19825265 Country of ref document: EP Kind code of ref document: A1 |