CN111984347A - Interaction processing method, device, equipment and storage medium - Google Patents

Interaction processing method, device, equipment and storage medium

Info

Publication number
CN111984347A
Authority
CN
China
Prior art keywords
camera module
image data
resolution mode
screen
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910425579.9A
Other languages
Chinese (zh)
Inventor
武隽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201910425579.9A
Publication of CN111984347A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44505Configuring for program initiating, e.g. using registry, configuration files

Abstract

In the disclosed embodiments, a camera module with a dynamic vision sensor (DVS) is arranged on a terminal device. DVS image data collected by the camera module is obtained, a target object for instructing the terminal device to perform an operation is identified from the DVS image data, and in response to an operation instruction corresponding to the target object, the terminal device is controlled to perform the operation matched with the operation instruction. Interaction with the terminal device can thus be achieved without touch or voice input, with a small generated data volume and a fast response speed.

Description

Interaction processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an interaction processing method, apparatus, device, and storage medium.
Background
With the rapid development of technology, various electronic devices have emerged, such as personal computers, tablet computers, and smartphones, and electronic devices offering natural interaction are gaining favor with more and more users. Interaction between intelligent devices and users has therefore become a research and development focus for major intelligent terminal manufacturers, and various technical schemes for operation interaction with users on intelligent terminals have appeared. However, in the prior art, human-computer interaction is mostly based on touch or voice, user experience has changed little, and a user cannot operate an electronic device under special conditions where touch control or voice control is inconvenient.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an interaction processing method, apparatus, device, and storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided an interaction processing method applied to a terminal device, where the terminal device is provided with a camera module including a dynamic vision sensor (DVS), and the method includes:
obtaining DVS image data collected by the camera module;
identifying, from the DVS image data, a target object for instructing the terminal device to perform an operation;
and in response to an operation instruction corresponding to the target object, controlling the terminal device to perform the operation matched with the operation instruction.
In an embodiment, the operation performed by the terminal device at least includes lighting up the screen, and the obtaining of DVS image data collected by the camera module includes: obtaining the DVS image data collected by the camera module when the screen of the terminal device is in the screen-off state.
In one embodiment, the camera module is configured with a low resolution mode and at least one other resolution mode, the number of pixel units of the camera module in the working state in the low resolution mode is less than the number in the working state in the other resolution modes, and the camera module switches between modes when a preset mode switching condition is met.
In one embodiment, the camera module is in a normally open state in the low resolution mode; or the camera module is in the low resolution mode when the screen of the terminal device is in the screen-off state.
In one embodiment, the preset mode switching condition includes either of the following:
determining, according to DVS image data collected by the camera module in the current mode, that the change in the current ambient light satisfies a preset change condition;
and determining, according to DVS image data collected by the camera module in the current mode, that an object to be identified exists in the acquisition area of the camera module.
In one embodiment, the other resolution modes include a high resolution mode, and the obtaining of DVS image data collected by the camera module includes: obtaining DVS image data collected by the camera module in the high resolution mode;
the method further includes:
obtaining low-resolution DVS image data collected by the camera module in the low resolution mode;
and when it is determined from the low-resolution DVS image data that the change in the current ambient light satisfies the preset change condition, controlling the camera module to switch from the low resolution mode to the high resolution mode.
In an embodiment, the other resolution modes include a medium resolution mode and a high resolution mode, and the obtaining of DVS image data collected by the camera module includes: obtaining DVS image data collected by the camera module in the high resolution mode;
the method further includes:
obtaining low-resolution DVS image data collected by the camera module in the low resolution mode;
when it is determined from the low-resolution DVS image data that the change in the current ambient light satisfies the preset change condition, controlling the camera module to switch from the low resolution mode to the medium resolution mode;
obtaining medium-resolution DVS image data collected by the camera module in the medium resolution mode;
and when it is determined from the DVS image data collected by the camera module in the medium resolution mode that an object to be identified exists in the acquisition area of the camera module, controlling the camera module to switch from the medium resolution mode to the high resolution mode.
In one embodiment, the target object includes a specified gesture, a specified face, and/or a specified body posture.
In one embodiment, a mapping relation between target objects and operation instructions is pre-configured, and the operation matched with the operation instruction includes one or more of the following:
unlocking the screen, triggered in the screen-off state;
turning on the flashlight, triggered in the screen-off state;
starting a designated application, triggered in the screen-off state;
displaying a designated page of a designated application, triggered in the screen-off state;
displaying a new message of a designated application, triggered in the screen-off state;
and answering a caller's incoming call in the screen-off state.
According to a second aspect of the embodiments of the present disclosure, an interaction processing apparatus is provided, where the apparatus is provided in a terminal device, the terminal device is provided with a camera module including a dynamic vision sensor DVS, and the apparatus includes:
the data acquisition module is used for acquiring DVS image data acquired by the camera module;
an object identification module, configured to identify a target object for instructing a terminal device to perform an operation from the DVS image data;
and the operation control module is used for responding to the operation instruction corresponding to the target object and controlling the terminal equipment to execute the operation matched with the operation instruction.
In an embodiment, the operation performed by the terminal device at least includes lighting up the screen, and the obtaining of DVS image data collected by the camera module includes: obtaining the DVS image data collected by the camera module when the screen of the terminal device is in the screen-off state.
In one embodiment, the camera module is configured with a low resolution mode and at least one other resolution mode, the number of pixel units of the camera module in a working state in the low resolution mode is less than the number of pixel units of the camera module in the working state in the other resolution mode, and different modes are switched when a preset mode switching condition is met.
In one embodiment, the preset mode switching condition includes any one of the following conditions:
determining, according to DVS image data collected by the camera module in the current mode, that the change in the current ambient light satisfies a preset change condition;
and determining, according to DVS image data collected by the camera module in the current mode, that an object to be identified exists in the acquisition area of the camera module.
According to a third aspect of the embodiments of the present disclosure, there is provided a computer device comprising a camera module based on a dynamic vision sensor DVS, a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method according to any one of the above when executing the program.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of any of the methods described above.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects:
in these embodiments, a camera module including a dynamic vision sensor is arranged on the terminal device, DVS image data collected by the camera module is obtained, a target object for instructing the terminal device to perform an operation is identified from the DVS image data, and in response to an operation instruction corresponding to the target object, the terminal device is controlled to perform the operation matched with the operation instruction. Interaction with the terminal device can thus be achieved without touch or voice input, and because the DVS image data includes only data from pixel units whose brightness has changed, the data volume is small and the response speed is fast.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating an interaction processing method according to an exemplary embodiment of the present disclosure.
Fig. 2 is a schematic diagram illustrating a camera module set-up position according to an exemplary embodiment of the present disclosure.
FIG. 3 is a diagram illustrating several gestures according to an exemplary embodiment of the present disclosure.
FIG. 4 is a flow chart illustrating another interaction processing method according to an exemplary embodiment of the present disclosure.
FIG. 5 is a flow chart illustrating another interaction processing method according to an exemplary embodiment of the present disclosure.
FIG. 6 is a block diagram illustrating an interaction processing device according to an example embodiment of the present disclosure.
FIG. 7 is a block diagram illustrating another interaction processing device according to an example embodiment of the present disclosure.
Fig. 8 is a block diagram illustrating another interaction processing device according to an example embodiment of the present disclosure.
Fig. 9 is a hardware block diagram of a computer device in which an interaction processing apparatus according to an exemplary embodiment of the present disclosure is located.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context.
With the widespread use of intelligent terminals, more and more people have become inseparable from terminal devices such as mobile phones. Interaction between terminal devices and users has become a research and development focus for major terminal manufacturers, and various technical schemes for operation interaction with users on terminal devices have appeared, for example human-computer interaction schemes on touch screens and voice-based human-computer interaction schemes. Under some special conditions, such as when touch control or voice control is inconvenient, the user cannot operate the electronic device, which affects user experience.
In view of this, an embodiment of the present application provides an interaction scheme: a camera module including a dynamic vision sensor is disposed on a terminal device, DVS image data collected by the camera module is obtained, a target object for instructing the terminal device to perform an operation is identified from the DVS image data, and in response to an operation instruction corresponding to the target object, the terminal device is controlled to perform the operation matched with the operation instruction. Interaction with the terminal device can thus be performed without touch or voice input, and because the DVS image data includes only data from pixel units where a light intensity change was detected, the data volume is small and the response speed is fast.
The interaction processing method provided by this embodiment may be implemented by software, by a combination of software and hardware, or by hardware alone, and the related hardware may be composed of one or more physical entities. The method of this embodiment can be applied to electronic devices with a camera module. The electronic device may be a portable device such as a smartphone, smart learning machine, tablet computer, notebook computer, or PDA (Personal Digital Assistant), a fixed device such as a desktop computer, or a wearable device such as a smart band or smart necklace.
A Dynamic Vision Sensor (DVS), also known as a dynamic event sensor, is a biomimetic vision sensor that mimics the human retina using pulse-triggered neurons. The sensor contains an array of pixel units, and each pixel unit responds and records only when it senses a change in light intensity, capturing regions of rapid light intensity change; the specific composition of the dynamic vision sensor is not elaborated here. The DVS may employ an event-triggered processing mechanism to output an asynchronous stream of event data, where each event may include, for example, light intensity change information (e.g., a timestamp of the change and a light intensity value) and the coordinate position of the triggered pixel unit. The response speed of the DVS is no longer limited by conventional exposure time and frame rate, so it can detect high-speed objects moving at the equivalent of ten thousand frames per second; the DVS has a larger dynamic range, accurately sensing and outputting scene changes in low-illumination or high-exposure environments; its power consumption is lower; and since each pixel unit responds to intensity changes independently, the DVS is not affected by motion blur.
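To make the event-stream format concrete, the following is a minimal Kotlin sketch of how a single DVS event and a frame of DVS image data might be represented; the field names and the grouping of events by timestamp are illustrative assumptions, not a format defined by this disclosure or by any particular sensor vendor.

```kotlin
// Illustrative sketch only: field names and layout are assumptions,
// not a vendor-specified DVS event format.
data class DvsEvent(
    val timestampMicros: Long, // time at which the pixel unit was triggered
    val x: Int,                // column of the triggered pixel unit
    val y: Int,                // row of the triggered pixel unit
    val intensity: Int         // measured light intensity value
)

// DVS "image data": only pixel units whose light intensity changed
// contribute events, so a frame is a sparse set of events sharing
// one timestamp rather than a dense pixel grid.
data class DvsFrame(val timestampMicros: Long, val events: List<DvsEvent>)

// Group an asynchronous event stream into frames by timestamp.
fun toFrames(stream: List<DvsEvent>): List<DvsFrame> =
    stream.groupBy { it.timestampMicros }
        .map { (ts, evs) -> DvsFrame(ts, evs) }
        .sortedBy { it.timestampMicros }
```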
In addition to the dynamic vision sensor, the camera module may include a lens, a lens holder, a filter, capacitors, resistors, and other components, forming a module capable of collecting image data; this is not limited here.
In one embodiment, a smart phone is taken as an example for illustration, and an execution subject of the embodiment of the present disclosure may be the smart phone, or may be a system service installed in the smart phone. It should be noted that the smart phone is only one application example provided in the embodiment of the present disclosure, and it should not be understood that the technical solution provided in the embodiment of the present disclosure can only be applied in the scenario of the smart phone.
The embodiments of the present disclosure will be described below with reference to the accompanying drawings.
As shown in fig. 1, fig. 1 is a flowchart illustrating an interaction processing method according to an exemplary embodiment of the present disclosure, including the following steps:
in step 102, obtaining DVS image data collected by a camera module;
in step 104, identifying a target object for instructing a terminal device to perform an operation from the DVS image data;
in step 106, in response to the operation instruction corresponding to the target object, the terminal device is controlled to execute an operation matched with the operation instruction.
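As an illustration of how steps 102 to 106 might fit together, the following sketch (reusing the DvsFrame type from the earlier sketch) wires the three steps into a single handler; the recognizer interface, the TargetObject names, and the operation table are hypothetical placeholders, since the disclosure leaves the recognition model and instruction set open.

```kotlin
// Hypothetical sketch of the three-step flow in fig. 1. The recognizer
// and the operation table are placeholders, not part of the disclosure.
enum class TargetObject { FINGER_HEART, VICTORY, OK, SIX, OPEN_PALM }

interface TargetObjectRecognizer {
    // Returns the recognized target object, or null if none is present.
    fun recognize(frame: DvsFrame): TargetObject?
}

class InteractionProcessor(
    private val recognizer: TargetObjectRecognizer,
    private val operations: Map<TargetObject, () -> Unit> // pre-configured mapping
) {
    // Step 102: receive DVS image data; step 104: identify the target
    // object; step 106: perform the operation matched with it.
    fun onDvsFrame(frame: DvsFrame) {
        val target = recognizer.recognize(frame) ?: return
        operations[target]?.invoke()
    }
}
```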
The method can be used in a terminal device provided with a camera module including a dynamic vision sensor DVS. The camera module can be arranged on an outer surface of the terminal device, such as the front or back, to collect image data of the environment where the terminal device is located. In one embodiment, the camera module is arranged in the area surrounding a camera of the terminal device; for example, it may be disposed in the peripheral area of the front camera or of the rear camera. As shown in fig. 2, fig. 2 is a schematic diagram illustrating a camera module placement according to an exemplary embodiment of the present disclosure. The figure takes a smartphone as the example terminal device; a camera module including a dynamic vision sensor DVS is provided to the right of the smartphone's front camera.
The dynamic vision sensor collects event data in a scene and outputs events when the scene changes. For example, when no object in the scene moves relative to the terminal device, the light intensity detected by the pixel units in the dynamic vision sensor does not change; when some object moves relative to the terminal device, the light changes, pixel events are triggered, and an event data stream is output for the pixel units that detected a light intensity change. Each piece of event data in the stream may include the coordinate position of the pixel unit whose brightness changed and timestamp information of the moment it was triggered, and the DVS image data may be composed of the event data sharing the same timestamp. In the dynamic vision sensor, a single pixel outputs an event (pulse) signal only when the light intensity it receives changes; for example, if the brightness increases beyond a threshold, a brightness-increase event is output for that pixel. Thus, the DVS image data may be partial image data, with no event data for pixel units where no light intensity change was detected.
After obtaining the DVS image data collected by the camera module, a target object for instructing the terminal device to perform an operation may be identified from the DVS image data.
As for the target object, it is an object for instructing the terminal device to perform an operation, and different kinds of target objects can be configured according to the operations the terminal device needs to perform. In one example, the operation performed by the terminal device is an operation performed after successful identity verification, for example unlocking, payment, or login. Accordingly, the target object may be an object used for identity verification: in a scenario of identity verification through face recognition, the target object may be a designated face; in a scenario of identity verification through gesture recognition, it may be a designated gesture; and so on.
In another example, the operation performed by the terminal device is a designated operation, such as lighting up the screen, opening the system favorites, entering a designated page of a designated application, presenting a new message of a designated application, or answering an incoming call in the screen-locked state. Accordingly, the target object may be an object mapped to the designated operation, for example a designated gesture or a designated body posture. The designated gesture may be a gesture traced by hand, for example a "six" gesture, a finger-heart ("bixin") gesture, a "2" gesture (also known as the victory or "yeah" gesture), an OK gesture, a thumbs-up gesture, an open-palm gesture, or a gesture tracing some other number. As shown in fig. 3, fig. 3 is a diagram illustrating several gestures according to an exemplary embodiment of the present disclosure. It will be appreciated that the schematic diagram illustrates only a few gestures; other gestures are possible, such as tracing the numbers 1, 3, or 4. Each gesture may have various modifications as long as the meaning of the corresponding gesture can be expressed, which is not limited here. The designated body posture may be a hands-up posture, a hands-on-hips posture, or the like.
In one example, the mapping relationship between target objects and operation instructions may be configured in advance. An operation instruction is an instruction instructing the terminal device to perform an operation. In one example, target objects and operation instructions may be in a one-to-one mapping, so that each target object triggers the device to perform one operation. In another example, they may be in a many-to-one mapping, so that multiple target objects trigger the device to perform one operation. Taking gestures as an example, several consecutive gestures may correspond to one operation instruction; for example, tracing the three gestures "3", "2", "1" in succession triggers the terminal device to light up and unlock the screen.
The mapping relationship between target objects and operation instructions may be configured in advance by the system or set by the user; for example, a mapping-relation setting service may be provided for users to create mappings between target objects and operation instructions.
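A minimal sketch of such a pre-configured mapping, including a many-to-one sequence entry, might look as follows; the gesture-to-operation pairs mirror examples given in this description, while the function names are hypothetical stand-ins for real device actions.

```kotlin
// Hypothetical pre-configured mapping between target objects and
// operation instructions; the pairs mirror the examples in this text.
val operationTable: Map<TargetObject, () -> Unit> = mapOf(
    TargetObject.FINGER_HEART to { openSystemFavorites() },
    TargetObject.VICTORY to { showPaymentPage() },
    TargetObject.OK to { showNewMessages() },
    TargetObject.SIX to { answerIncomingCall() },
    TargetObject.OPEN_PALM to { unlockScreen() }
)

// Many-to-one: a sequence of gestures ("3", "2", "1") can map to one
// instruction, e.g. lighting up and then unlocking the screen.
val sequenceTable: Map<List<String>, () -> Unit> = mapOf(
    listOf("3", "2", "1") to { lightUpScreen(); unlockScreen() }
)

// Placeholder operations standing in for real device actions.
fun openSystemFavorites() = println("open system favorites")
fun showPaymentPage() = println("show payment page")
fun showNewMessages() = println("show new messages")
fun answerIncomingCall() = println("answer incoming call")
fun unlockScreen() = println("unlock screen")
fun lightUpScreen() = println("light up screen")
```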
As to how to identify the target object from the DVS image data: in one example, a model capable of identifying the target object may be obtained by machine learning, and in the model application stage the learned model is used to identify the target object from the DVS image data. For example, model training may be performed in a supervised manner using preset training samples to obtain a deep learning network model. The training samples may be labeled sample images, where the labels indicate the location and category of the target object. The sample images may include DVS image data and may also include images acquired by conventional image sensors. For each target object, sample images at different shooting angles and/or with different deformations may be included in order to improve the recognition rate of the model. As an example, image processing may be performed by an image signal processing (ISP) unit.
It should be understood that the above-mentioned identification method of the target object is only an example, and should not be construed as any limitation to the present disclosure, and other existing or future methods of identifying the target object may be applied to the present disclosure, and all of them should be included in the scope of the present disclosure.
The operation performed by the terminal device may be an operation following successful identity verification, or a previously designated operation. For example, the operation matched with the operation instruction includes one or more of the following:
lighting up the screen;
unlocking the screen;
turning on the flashlight;
starting a designated application;
displaying a designated page of a designated application;
displaying a new message of a designated application;
and answering a caller's incoming call.
Lighting up the screen may mean controlling the screen to switch from the screen-off state to the screen-on state, where the screen-off state is a state in which the screen is black and the screen-on state is a state in which the screen is lit.
To protect personal information, users often lock the screen of the terminal device; after the screen is locked, the content on the terminal device can only be viewed after unlocking, for example by entering a password. This embodiment can realize automatic unlocking by identifying the target object.
The designated application may be an application installed in the terminal device, for example a system application or a third-party application; for instance, it may be the system favorites/photos application. After the designated application is launched, its home page/default page may be displayed. Illustratively, the system favorites are opened by a finger-heart ("bixin") gesture to view their content.
The designated page of the designated application may be a page the user wishes to view quickly, for example the payment page of a payment program, which may be a page containing a payment code. The designated application may be an already started application or a not-yet-started one; a started application may be running in the foreground or in the background. Illustratively, the payment page of the payment program is opened by a victory ("yeah") gesture.
The new message of the designated application may be all unread messages, or the most recently received unread message, and so on, as configured. The designated application may be an already started application or a not-yet-started one; a started application may be running in the foreground or in the background. For example, a new WeChat message or an unread message may be opened by an OK gesture.
As for answering a caller's phone call: currently, when a dialing request is received, the call can be answered by touching an answer button; this embodiment realizes automatic call connection by identifying the target object.
For example, regardless of the current state of the terminal device, after the target object is identified, the terminal device may be triggered to perform the operation matched with the operation instruction. If the terminal device is currently in the screen-off state, the matched operation can be completed starting from the screen-off state; jumping directly from the screen-off state to the corresponding operation improves interaction efficiency and brings a new experience to the user. If the terminal device is currently in the screen-on state, the matched operation can be completed starting from the screen-on state.
For example, the operation matched with the operation instruction may be an operation triggered from the screen-off state, including one or more of the following:
unlocking the screen, triggered in the screen-off state;
turning on the flashlight, triggered in the screen-off state;
starting a designated application, triggered in the screen-off state;
displaying a designated page of a designated application, triggered in the screen-off state;
displaying a new message of a designated application, triggered in the screen-off state;
and answering a caller's incoming call in the screen-off state.
Therefore, this embodiment can detect the target object while the screen is black and thereby trigger operations such as unlocking the screen, turning on the flashlight, starting a designated application, displaying a designated page or new message of a designated application, or answering a caller's incoming call, improving operation efficiency.
In addition, the operations in the above examples may be combined according to the current state of the terminal device. For example, if the terminal device is currently in the screen-off state, then to display the designated page of the designated application, the screen may first be lit up and unlocked, and then the designated page displayed; if the designated application has not been started, it may be started before the designated page is displayed. Various operations in the above embodiments may be combined arbitrarily as long as the combinations involve no conflict or contradiction. Some intermediate operations needed to achieve the final purpose are omitted here, but it should be understood that such indispensable intermediate operations are also included among the operations matched with the operation instruction. The operations performed by the terminal device include, but are not limited to, the above; other operations are possible and are not listed here.
As can be seen from the above embodiments, in this embodiment a camera module including a dynamic vision sensor is arranged on the terminal device, DVS image data collected by the camera module is obtained, a target object for instructing the terminal device to perform an operation is identified from the DVS image data, and in response to the operation instruction corresponding to the target object, the terminal device is controlled to perform the matched operation. Interaction with the terminal device is therefore possible without touch or voice input, with a small data volume and a fast response speed.
Taking the case where the operation performed by the terminal device at least includes lighting up the screen as an example, fig. 4 is a flowchart of another interaction processing method shown in the present disclosure according to an exemplary embodiment. The method may be used in a terminal device provided with a camera module including a dynamic vision sensor DVS, and includes the following steps:
in step 402, when the screen of the terminal device is in the screen-off state, DVS image data collected by the camera module is obtained;
in step 404, identifying a target object for instructing a terminal device to perform an operation from the DVS image data, the operation including at least lighting up a screen;
In step 406, in response to the operation instruction corresponding to the target object, the terminal device is controlled to execute an operation matched with the operation instruction.
Parts of fig. 4 that are the same as in fig. 1 are not repeated here.
This embodiment can trigger the terminal device to perform corresponding operations through the target object while the screen of the terminal device is in the screen-off state: for example, opening the system favorites with a finger-heart gesture in the screen-off state; opening the payment page of the payment program with a victory ("yeah") gesture in the screen-off state; opening a new WeChat message or a new short message with an OK gesture in the screen-off state; answering a call on the locked screen with a "six" gesture in the screen-off state; and unlocking the screen with an open-palm gesture in the screen-off state. Jumping directly from the screen-off state to the corresponding operation improves interaction efficiency and brings a new experience to the user.
In one embodiment, to reduce the power consumption of the camera module, different power consumption modes can be configured for it. Because the resolution of the image data collected in the different modes differs, these may be called resolution modes; because the power consumption differs as well, they may also be called power consumption modes. The different resolution modes can be distinguished by the number of pixel units of the dynamic vision sensor that are in the on (working) state. Illustratively, a low resolution (LR) mode is configured for the camera module, in which only part of the camera module is in the working state. Taking a vision sensor with one million pixels as an example, one twenty-fourth of the pixel units may be turned on and the rest turned off to reduce power consumption; alternatively, a specified number of pixel units may be controlled to be on while the others are off. The camera module is also configured with at least one other resolution mode whose power consumption is higher than that of the low resolution mode, and the number of pixel units in the working state in the low resolution mode is less than in the other resolution modes. Correspondingly, the resolution of the image data collected in the low resolution mode is lower than in the other resolution modes.
In one embodiment, to enable detection of the target object, the camera module may be kept normally open in the low resolution mode, and mode switching is performed when a preset mode switching condition is met. For example, while the terminal device is in the screen-off state, the camera module is in the low resolution mode.
Thus, in this embodiment, keeping the camera module normally open in the low resolution mode guarantees real-time detection while reducing power consumption.
In another embodiment, the camera module is in the low resolution mode when the screen of the terminal device is in the screen-off state, and may be turned off when the screen is in the screen-on state.
The preset mode switching condition may include a condition for switching from the low resolution mode to another resolution mode, a condition for switching between other resolution modes, or a condition for switching from another resolution mode back to the low resolution mode.
In one embodiment, the preset mode switching condition includes determining that the change of the current ambient light satisfies a preset change condition according to DVS image data acquired by the camera module in the current mode.
The preset change condition is a preset condition for switching modes according to changes in ambient light. For example, it may be that the light intensity change value of the current ambient light is greater than a set threshold. The event data stream in the DVS image data may include illumination intensity, so whether the light intensity change of the current ambient light exceeds the set threshold can be determined from the illumination intensity of at least two frames of images. In another example, in addition to requiring the light intensity change value to exceed the set threshold, the number of pixel units in the DVS image data that detected an illumination change may also be taken into account when judging whether the preset change condition is satisfied.
This condition can serve as the condition for switching from the low resolution mode to another resolution mode: whether the current ambient light change satisfies the preset change condition is judged from the DVS image data collected by the camera module in the low resolution mode; when it does, the camera module can be triggered to switch to the next-level mode, and otherwise the low resolution mode is maintained. For example, when the switching condition is met, a switching notification may be sent to the camera module so that it performs the mode switch.
By way of example, the other resolution modes may include a high resolution (HR) mode, in which all pixel units of the camera module may be in the working state so that the camera module collects image data at a higher resolution. Step 102 of obtaining DVS image data collected by the camera module may include: obtaining DVS image data collected by the camera module in the high resolution mode. Correspondingly, the method further includes the following steps:
obtaining low-resolution DVS image data collected by the camera module in the low resolution mode;
and when it is determined from the low-resolution DVS image data that the change in the current ambient light satisfies the preset change condition, controlling the camera module to switch from the low resolution mode to the high resolution mode.
In this embodiment, the DVS stays normally open in the low resolution mode, detecting only changes in ambient light. When the detected ambient light change exceeds the set threshold, the high resolution mode is triggered and it is identified whether a target object is present, guaranteeing real-time detection while reducing power consumption.
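The following sketch illustrates this low-to-high trigger under stated assumptions: the threshold value, the pixel-count criterion, the per-frame intensity aggregation, and the camera-module interface are all illustrative placeholders rather than values or APIs given by the disclosure.

```kotlin
// Hypothetical controller for the LR -> HR switch; the threshold values,
// the per-frame intensity aggregation, and the module interface are
// assumptions, not values given by the disclosure.
interface CameraModule {
    enum class Mode { LOW_RES, HIGH_RES }
    fun setMode(mode: Mode)
}

class AmbientLightTrigger(
    private val camera: CameraModule,
    private val intensityThreshold: Int = 50, // assumed "set threshold"
    private val minChangedPixels: Int = 100   // assumed pixel-count criterion
) {
    private var lastMeanIntensity: Int? = null

    // Called for each frame while the DVS is normally open in low resolution.
    fun onLowResFrame(frame: DvsFrame) {
        if (frame.events.isEmpty()) return
        val mean = frame.events.map { it.intensity }.average().toInt()
        val last = lastMeanIntensity
        lastMeanIntensity = mean
        if (last == null) return
        // Preset change condition: light intensity change above the set
        // threshold, combined with enough pixel units reporting a change.
        if (kotlin.math.abs(mean - last) > intensityThreshold &&
            frame.events.size >= minChangedPixels
        ) {
            camera.setMode(CameraModule.Mode.HIGH_RES)
        }
    }
}
```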
In another embodiment, whether to configure multiple other resolution modes may be decided according to the resolution required by the images used to identify the target object. For example, different kinds of target objects map to different operation instructions; some target objects can be identified from medium-resolution images while others require high-resolution images, so several levels of resolution modes may be configured.
Whether to configure multiple other resolution modes may also depend on whether it is necessary, before recognizing the target object, to detect whether an object to be recognized is present. For example, in some scenarios it may first be determined whether an object to be recognized exists in the image data, and then whether that object is the target object, so as to improve recognition accuracy; the presence of an object to be recognized is the basis/precondition for performing target object recognition. Taking a designated face as the target object, the object to be recognized is a face: it can first be judged whether a face exists in the image data, and only if one exists whether it is the designated face. Taking a body posture as the target object, the object to be recognized is a person: it can first be judged whether a person is present in the image data, and only then what the person's body posture is. Accordingly, the preset mode switching condition may include: determining, according to DVS image data collected by the camera module in the current mode, that an object to be identified exists in the acquisition area of the camera module.
The presence of the object to be identified in the image data is the basis/precondition for performing target object identification. The judgment that an object to be identified exists in the acquisition area of the camera module can serve as the condition for switching from the low resolution mode to another resolution mode, or as a condition for switching between other resolution modes.
As for how to determine whether an object to be identified exists in the acquisition area of the camera module: in one embodiment, this may be judged from whether the outline of the object to be identified is present in the image data. It is understood that other means may also be adopted, for example face-detection or person-detection algorithms in the related art.
For example, the other resolution modes include a medium resolution (MR) mode and a high resolution mode, and the numbers of pixel units of the camera module in the working state in the low, medium, and high resolution modes increase in sequence. Step 102 of obtaining DVS image data collected by the camera module may include: obtaining DVS image data collected by the camera module in the high resolution mode. Correspondingly, the method further includes the following steps:
obtaining low-resolution DVS image data collected by the camera module in the low resolution mode;
when it is determined from the low-resolution DVS image data that the change in the current ambient light satisfies the preset change condition, controlling the camera module to switch from the low resolution mode to the medium resolution mode;
obtaining medium-resolution DVS image data collected by the camera module in the medium resolution mode;
and when it is determined from the DVS image data collected by the camera module in the medium resolution mode that an object to be identified exists in the acquisition area of the camera module, controlling the camera module to switch from the medium resolution mode to the high resolution mode.
This embodiment configures three levels of resolution modes and switches through them in sequence, which reduces power consumption.
The condition for switching from another resolution mode back to the low resolution mode may be: determining from the low-resolution DVS image data that the current ambient light change value is less than or equal to the set threshold; or determining from the DVS image data collected in the medium resolution mode that no object to be identified exists in the acquisition area of the camera module; or a preset delay having elapsed after the terminal device was controlled to perform the operation matched with the operation instruction; and so on. Setting conditions for switching back to the low resolution mode ensures that the camera module stays in the low resolution mode most of the time, further reducing power consumption.
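Putting the forward and backward switching conditions together, a minimal sketch of the low/medium/high state machine might look as follows; the predicate functions are placeholders for the checks described above, not a prescribed implementation.

```kotlin
// Hypothetical three-level mode state machine (LR -> MR -> HR and back).
// The predicate functions stand in for the checks described in the text.
enum class ResolutionMode { LOW, MEDIUM, HIGH }

class ModeStateMachine(
    private val ambientChanged: (DvsFrame) -> Boolean, // LR: preset change condition
    private val objectPresent: (DvsFrame) -> Boolean   // MR: object-to-identify check
) {
    var mode: ResolutionMode = ResolutionMode.LOW
        private set

    fun onFrame(frame: DvsFrame) {
        mode = when (mode) {
            ResolutionMode.LOW ->
                if (ambientChanged(frame)) ResolutionMode.MEDIUM else ResolutionMode.LOW
            ResolutionMode.MEDIUM ->
                // Fall back to LR when no object to identify is found.
                if (objectPresent(frame)) ResolutionMode.HIGH else ResolutionMode.LOW
            ResolutionMode.HIGH -> ResolutionMode.HIGH // stays until recognition completes
        }
    }

    // E.g. invoked after the matched operation ran and a preset delay elapsed.
    fun resetToLow() { mode = ResolutionMode.LOW }
}
```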
It should be understood that the preset mode switching conditions above are only examples and should not be construed as limiting the present disclosure; other existing or future conditions for triggering mode switching may be applied to the present disclosure and shall fall within its protection scope.
The various technical features in the above embodiments can be combined arbitrarily as long as the combinations involve no conflict or contradiction; for reasons of space they are not described one by one, but any such combination also belongs to the scope disclosed in this specification.
One of the combinations is exemplified below.
As shown in fig. 5, fig. 5 is a flowchart of another interaction processing method shown in the present disclosure according to an exemplary embodiment, which may be used in a terminal device provided with a camera module including a dynamic vision sensor DVS, the camera module being configured with a low resolution mode and a high resolution mode, the method including:
in step 502, acquiring low-resolution DVS image data acquired by the camera module in a low-resolution mode;
in step 504, it is determined whether the current ambient light variation value is greater than a set threshold value based on the low-resolution DVS image data, if not, the process returns to step 502, and if so, the process proceeds to step 506.
In step 506, the camera module is controlled to switch from the low resolution mode to the high resolution mode.
In step 508, DVS image data acquired by the camera module in the high resolution mode is acquired;
In step 510, identifying a target object for instructing a terminal device to perform an operation from the DVS image data;
in step 512, in response to the operation instruction corresponding to the target object, the terminal device is controlled to execute an operation matched with the operation instruction.
Parts of fig. 5 that are the same as in fig. 1 or fig. 4 are not repeated here. This embodiment configures a low resolution mode (LR) and a high resolution mode (HR) for the camera module and switches between them according to the scene to optimize power consumption.
In an example, the operations of the terminal device may be limited to operations triggered in the screen-off state: step 502 may be performed when the screen of the terminal device is in the screen-off state, and when the screen is in the screen-on state the camera module may be switched to the low resolution mode or turned off. This embodiment exploits the low power consumption of the DVS: the target object can be recognized in the screen-off state, so the user can unlock with the screen dark, and the recognized target object can control the terminal device to directly perform certain designated operations with the screen lit.
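Tying the pieces together, the following hypothetical loop mirrors the flow of fig. 5: low-resolution frames feed the ambient-light trigger, and once the module is in the high resolution mode, frames feed the recognizer. Everything named here comes from the earlier sketches and remains illustrative.

```kotlin
// Hypothetical end-to-end loop for fig. 5, reusing the sketches above.
fun runInteractionLoop(
    trigger: AmbientLightTrigger,
    processor: InteractionProcessor,
    frames: Sequence<Pair<CameraModule.Mode, DvsFrame>> // (mode when captured, frame)
) {
    for ((mode, frame) in frames) {
        when (mode) {
            CameraModule.Mode.LOW_RES -> trigger.onLowResFrame(frame)  // steps 502-506
            CameraModule.Mode.HIGH_RES -> processor.onDvsFrame(frame)  // steps 508-512
        }
    }
}
```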
Corresponding to the embodiment of the interaction processing method, the disclosure also provides embodiments of an interaction processing device, equipment applied by the device and a storage medium.
As shown in fig. 6, fig. 6 is a block diagram of an interaction processing apparatus according to an exemplary embodiment, where the apparatus is provided in a terminal device, the terminal device is provided with a camera module including a dynamic vision sensor DVS, and the apparatus includes:
a data obtaining module 62, configured to obtain DVS image data acquired by the camera module;
an object recognition module 64, configured to recognize a target object for instructing a terminal device to perform an operation from the DVS image data;
and the operation control module 66 is used for responding to an operation instruction corresponding to the target object and controlling the terminal equipment to execute an operation matched with the operation instruction.
In an embodiment, the operation performed by the terminal device at least includes lighting up the screen, and the obtaining of DVS image data collected by the camera module includes: obtaining the DVS image data collected by the camera module when the screen of the terminal device is in the screen-off state.
In one embodiment, the camera module is configured with a low resolution mode and at least one other resolution mode, the number of pixel units of the camera module in a working state in the low resolution mode is less than the number of pixel units of the camera module in the working state in the other resolution mode, and different modes are switched when a preset mode switching condition is met.
In one embodiment, the preset mode switching condition includes any one of the following conditions:
determining, according to DVS image data collected by the camera module in the current mode, that the change in the current ambient light satisfies a preset change condition;
and determining, according to DVS image data collected by the camera module in the current mode, that an object to be identified exists in the acquisition area of the camera module.
In one embodiment, the camera module is in a normally open state in a low resolution mode, or when a screen of the terminal device is in a screen-off state, the camera module is in the low resolution mode.
In an embodiment, as shown in fig. 7, fig. 7 is a block diagram of another interactive processing device shown in the present disclosure according to an exemplary embodiment, on the basis of the foregoing embodiment shown in fig. 6, the other resolution modes include a high resolution mode, and the data obtaining module 62 is configured to: obtaining DVS image data collected by the camera module in a high-resolution mode;
the data acquisition module 62 is further configured to: acquiring low-resolution DVS image data acquired by the camera module in a low-resolution mode;
the apparatus further includes a mode switching module 70, configured to control the camera module to switch from the low resolution mode to the high resolution mode when it is determined that the change of the current ambient light satisfies a preset change condition according to the low resolution DVS image data.
In an embodiment, as shown in fig. 8, fig. 8 is a block diagram of another interactive processing device shown in the present disclosure according to an exemplary embodiment, on the basis of the foregoing embodiment shown in fig. 6, where the other resolution modes include a medium resolution mode and a high resolution mode, and the data obtaining module 62 is configured to: obtaining DVS image data collected by the camera module in a high-resolution mode;
the data acquisition module 62 is further configured to: acquiring low-resolution DVS image data acquired by the camera module in a low-resolution mode;
the apparatus further includes a mode switching module 80, configured to control the camera module to switch from the low resolution mode to the medium resolution mode when it is determined from the low-resolution DVS image data that the change in the current ambient light satisfies the preset change condition;
the data acquisition module 62 is further configured to: acquiring medium-resolution DVS image data acquired by the camera module in a medium-resolution mode;
the mode switching module 80 is further configured to control the camera module to switch from the medium resolution mode to the high resolution mode when it is determined, from the DVS image data collected by the camera module in the medium resolution mode, that an object to be identified exists in the acquisition area of the camera module.
In one embodiment, the target object includes a specified gesture, a specified face, and/or a specified body posture.
In one embodiment, a mapping relation between target objects and operation instructions is pre-configured, and the operation matched with the operation instruction includes one or more of the following:
unlocking the screen, triggered in the screen-off state;
turning on the flashlight, triggered in the screen-off state;
starting a designated application, triggered in the screen-off state;
displaying a designated page of a designated application, triggered in the screen-off state;
displaying a new message of a designated application, triggered in the screen-off state;
and answering a caller's incoming call in the screen-off state.
Accordingly, the present disclosure also provides an electronic device, which includes a camera module based on a dynamic vision sensor DVS, a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method according to any one of the above embodiments when executing the program.
Accordingly, the present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of any of the methods described above.
The present disclosure may take the form of a computer program product embodied on one or more storage media including, but not limited to, disk storage, CD-ROM, optical storage, and the like, having program code embodied therein. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of the storage medium of the computer include, but are not limited to: phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technologies, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium, may be used to store information that may be accessed by a computing device.
For details of how the functions and actions of each module in the above apparatus are implemented, refer to the implementation of the corresponding steps in the method; they are not described again here.
Since the apparatus embodiments substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant details. The apparatus embodiments described above are merely illustrative: the modules described as separate parts may or may not be physically separate, and the parts shown as modules may or may not be physical modules; they may be located in one place or distributed across a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the disclosed solution, which one of ordinary skill in the art can understand and implement without inventive effort.
Fig. 9 is a hardware structure diagram of a computer device in which an interaction processing apparatus according to an exemplary embodiment of the present disclosure is located. The apparatus 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to fig. 9, apparatus 900 may include one or more of the following components: processing component 902, memory 904, power component 906, multimedia component 908, audio component 910, input/output (I/O) interface 912, sensor component 914, and communication component 916. The device 900 is also provided with a camera module comprising a dynamic vision sensor, not shown in fig. 9.
The processing component 902 generally controls overall operation of the device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing component 902 may include one or more processors 920 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 902 can include one or more modules that facilitate interaction between processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operation at the apparatus 900. Examples of such data include instructions for any application or method operating on device 900, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 904 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 906 provides power to the various components of the device 900. The power components 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 900.
The multimedia component 908 includes a screen providing an output interface between the device 900 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 908 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the device 900 is in an operating mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, audio component 910 includes a Microphone (MIC) configured to receive external audio signals when apparatus 900 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 904 or transmitted via the communication component 916. In some embodiments, audio component 910 also includes a speaker for outputting audio signals.
I/O interface 912 provides an interface between processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 914 includes one or more sensors for providing status assessment of various aspects of the apparatus 900. For example, sensor assembly 914 may detect an open/closed state of device 900, the relative positioning of components, such as a display and keypad of device 900, the change in position of device 900 or one of the components of device 900, the presence or absence of user contact with device 900, the orientation or acceleration/deceleration of device 900, and the change in temperature of device 900. The sensor assembly 914 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate wired or wireless communication between the apparatus 900 and other devices. The apparatus 900 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast-associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 916 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 904 comprising instructions, executable by the processor 920 of the apparatus 900 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Wherein the instructions in the storage medium, when executed by the processor, enable the apparatus 900 to perform any of the above-described interaction processing methods.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This disclosure is intended to cover any variations, uses, or adaptations that follow the general principles of the disclosure and include such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. The specification and examples are to be considered exemplary only, with the true scope and spirit of the disclosure indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (15)

1. An interaction processing method, applied to a terminal device, wherein the terminal device is provided with a camera module comprising a dynamic vision sensor DVS, and the method comprises the following steps:
obtaining DVS image data collected by the camera module;
identifying a target object for instructing a terminal device to perform an operation from the DVS image data;
and in response to an operation instruction corresponding to the target object, controlling the terminal device to perform an operation matching the operation instruction.
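Read as a pipeline, claim 1 is: acquire DVS image data, recognize a target object in it, and dispatch the matched operation. For illustration only (this is not part of the claim language), a minimal sketch reusing the illustrative types from the earlier sketches; `identifyTargetObject` is an assumed stub standing in for whatever gesture, face, or posture recognition an implementation would use.

```kotlin
// Minimal end-to-end sketch of the claimed method; all names are assumptions.
fun processInteraction(data: DvsImageData) {
    // Step 1: DVS image data collected by the camera module arrives as input.
    // Step 2: identify a target object that instructs the device to perform an operation.
    val target: TargetObject? = identifyTargetObject(data)
    // Step 3: respond to the corresponding operation instruction, if one was recognized.
    if (target != null) executeOperation(target)
}

// Assumed recognition stub; a real implementation would run recognition
// over the event data rather than a trivial pixel-count check.
fun identifyTargetObject(data: DvsImageData): TargetObject? =
    if (data.activePixels > 0) TargetObject.PALM_WAVE else null
```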
2. The method according to claim 1, wherein the operation performed by the terminal device at least includes lighting up the screen, and the acquiring DVS image data collected by the camera module comprises: obtaining the DVS image data collected by the camera module when the screen of the terminal device is in a screen-off state.
3. The method according to claim 1, wherein the camera module is configured with a low-resolution mode and at least one other resolution mode, the number of pixel units of the camera module that are in the working state in the low-resolution mode is less than the number in the other resolution modes, and the camera module switches between modes when a preset mode switching condition is met.
4. The method according to claim 3, wherein the camera module is in a normally-on state in the low-resolution mode, or is in the low-resolution mode when the screen of the terminal device is in a screen-off state.
5. The method according to claim 3, wherein the preset mode switching condition comprises any one of the following conditions:
it is determined, according to the DVS image data collected by the camera module in the current mode, that the change in the current ambient light satisfies a preset change condition;
and it is determined, according to the DVS image data collected by the camera module in the current mode, that an object to be identified is present in the acquisition area of the camera module.
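The claim does not specify how the two conditions are evaluated. Since a DVS pixel fires only on a brightness change, one plausible reading, and it is only an assumption of this sketch, is to threshold the overall event rate for the ambient-light condition and the spatial extent of active pixels for the presence condition; the following factors the checks from the earlier controller sketch into named helpers.

```kotlin
// Assumed condition checks; thresholds and logic are illustrative only.
const val LIGHT_CHANGE_EVENT_RATE = 500 // events per batch, assumed
const val PRESENCE_ACTIVE_PIXELS = 200  // active pixels per batch, assumed

// A global surge in events is a plausible proxy for an ambient-light change,
// because every DVS pixel fires when its brightness changes.
fun ambientLightChanged(data: DvsImageData): Boolean =
    data.eventCount > LIGHT_CHANGE_EVENT_RATE

// A sustained cluster of active pixels is a plausible proxy for an object
// having entered the acquisition area.
fun objectPresent(data: DvsImageData): Boolean =
    data.activePixels > PRESENCE_ACTIVE_PIXELS
```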
6. The method according to claim 5, wherein the other resolution modes include a high-resolution mode, and the acquiring DVS image data collected by the camera module comprises: obtaining DVS image data collected by the camera module in the high-resolution mode;
the method further comprises the following steps:
acquiring low-resolution DVS image data collected by the camera module in the low-resolution mode;
and controlling the camera module to switch from the low-resolution mode to the high-resolution mode when it is determined, according to the low-resolution DVS image data, that the change in the current ambient light satisfies the preset change condition.
7. The method according to claim 5, wherein the other resolution modes include a medium-resolution mode and a high-resolution mode, and the acquiring DVS image data collected by the camera module comprises: obtaining DVS image data collected by the camera module in the high-resolution mode;
the method further comprises the following steps:
acquiring low-resolution DVS image data collected by the camera module in the low-resolution mode;
controlling the camera module to switch from the low-resolution mode to the medium-resolution mode when it is determined, according to the low-resolution DVS image data, that the change in the current ambient light satisfies the preset change condition;
acquiring medium-resolution DVS image data collected by the camera module in the medium-resolution mode;
and controlling the camera module to switch from the medium-resolution mode to the high-resolution mode when it is determined, according to the medium-resolution DVS image data, that an object to be identified is present in the acquisition area of the camera module.
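Claim 7 is exactly the two-stage cascade sketched after the apparatus description above; for illustration only, a short usage example driving that `ModeSwitchingController`, under the same assumptions about thresholds and data:

```kotlin
// Usage example for the ModeSwitchingController sketch; values are illustrative.
fun main() {
    val controller = ModeSwitchingController()
    // A surge of events (e.g. the room light turning on): low -> medium.
    controller.onImageData(DvsImageData(eventCount = 800, activePixels = 50))
    println(controller.mode) // MEDIUM
    // A cluster of active pixels (an object in the acquisition area): medium -> high.
    controller.onImageData(DvsImageData(eventCount = 300, activePixels = 350))
    println(controller.mode) // HIGH
}
```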
8. The method of any one of claims 1 to 7, wherein the target object comprises a specified gesture, a specified face, and/or a specified body posture.
9. The method according to any one of claims 1 to 7, wherein a mapping relation between target objects and operation instructions is pre-configured, and the operation matched with the operation instruction comprises one or more of the following:
unlocking the screen, triggered in the screen-off state;
turning on the flashlight, triggered in the screen-off state;
starting a designated application program, triggered in the screen-off state;
displaying a designated page of a designated application program, triggered in the screen-off state;
displaying a new message of a designated application program, triggered in the screen-off state;
and answering a call from the calling party in the screen-off state.
10. An interaction processing apparatus, wherein the apparatus is provided in a terminal device, the terminal device is provided with a camera module comprising a dynamic vision sensor DVS, and the apparatus comprises:
a data acquisition module, configured to acquire DVS image data collected by the camera module;
an object identification module, configured to identify, from the DVS image data, a target object for instructing the terminal device to perform an operation;
and an operation control module, configured to, in response to an operation instruction corresponding to the target object, control the terminal device to perform an operation matching the operation instruction.
11. The apparatus according to claim 10, wherein the operation performed by the terminal device at least includes lighting up the screen, and the data acquisition module is configured to obtain the DVS image data collected by the camera module when the screen of the terminal device is in a screen-off state.
12. The apparatus according to claim 10, wherein the camera module is configured with a low-resolution mode and at least one other resolution mode, the number of pixel units of the camera module that are in the working state in the low-resolution mode is less than the number in the other resolution modes, and the camera module switches between modes when a preset mode switching condition is met.
13. The apparatus according to claim 12, wherein the preset mode switching condition comprises any one of the following conditions:
it is determined, according to the DVS image data collected by the camera module in the current mode, that the change in the current ambient light satisfies a preset change condition;
and it is determined, according to the DVS image data collected by the camera module in the current mode, that an object to be identified is present in the acquisition area of the camera module.
14. A computer device, comprising a camera module based on a dynamic vision sensor DVS, a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 9 when executing the program.
15. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 9.
CN201910425579.9A 2019-05-21 2019-05-21 Interaction processing method, device, equipment and storage medium Pending CN111984347A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910425579.9A CN111984347A (en) 2019-05-21 2019-05-21 Interaction processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910425579.9A CN111984347A (en) 2019-05-21 2019-05-21 Interaction processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111984347A true CN111984347A (en) 2020-11-24

Family

ID=73436995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910425579.9A Pending CN111984347A (en) 2019-05-21 2019-05-21 Interaction processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111984347A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022165736A1 (en) * 2021-02-02 2022-08-11 豪威芯仑传感器(上海)有限公司 Method and system for identifying hand sliding direction
CN112929491A (en) * 2021-02-07 2021-06-08 展讯通信(上海)有限公司 Application program starting method and related device
CN112929491B (en) * 2021-02-07 2022-08-26 展讯通信(上海)有限公司 Application program starting method and related device
CN112949440A (en) * 2021-02-22 2021-06-11 豪威芯仑传感器(上海)有限公司 Method for extracting gait features of pedestrian, gait recognition method and system
CN115242952A (en) * 2022-07-28 2022-10-25 联想(北京)有限公司 Image acquisition method and device
CN116708655A (en) * 2022-10-20 2023-09-05 荣耀终端有限公司 Screen control method based on event camera and electronic equipment
CN116708655B (en) * 2022-10-20 2024-05-03 荣耀终端有限公司 Screen control method based on event camera and electronic equipment

Similar Documents

Publication Publication Date Title
CN106572299B (en) Camera opening method and device
CN111988493B (en) Interaction processing method, device, equipment and storage medium
CN110554815B (en) Icon awakening method, electronic device and storage medium
CN111984347A (en) Interaction processing method, device, equipment and storage medium
EP3136793A1 (en) Method and apparatus for awakening electronic device
US20180091580A1 (en) Method and apparatus for controlling device
CN112118380B (en) Camera control method, device, equipment and storage medium
US9924090B2 (en) Method and device for acquiring iris image
EP3299946B1 (en) Method and device for switching environment picture
CN106357934B (en) Screen locking control method and device
CN112650405B (en) Interaction method of electronic equipment and electronic equipment
CN110262692B (en) Touch screen scanning method, device and medium
EP3208742A1 (en) Method and apparatus for detecting pressure
US20190370584A1 (en) Collecting fingerprints
CN109039877A (en) A kind of method, apparatus, electronic equipment and storage medium showing unread message quantity
US10810439B2 (en) Video identification method and device
CN112114653A (en) Terminal device control method, device, equipment and storage medium
CN106896917B (en) Method and device for assisting user in experiencing virtual reality and electronic equipment
CN107422911B (en) Pressure value detection method and device and computer readable storage medium
CN108962189A (en) Luminance regulating method and device
CN109922203A (en) Terminal puts out screen method and apparatus
CN107580117A (en) Control method of electronic device and device
CN114187874A (en) Brightness adjusting method and device and storage medium
CN113315904A (en) Imaging method, imaging device, and storage medium
CN108334762B (en) Terminal unlocking method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination