CN105827900A - Data processing method and electronic device - Google Patents

Data processing method and electronic device

Info

Publication number
CN105827900A
CN105827900A · CN201610201362.6A
Authority
CN
China
Prior art keywords
image data
instruction
image
obtaining
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610201362.6A
Other languages
Chinese (zh)
Inventor
孙春阳
孙晓路
陈子冲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ninebot Beijing Technology Co Ltd
Original Assignee
Ninebot Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ninebot Beijing Technology Co Ltd filed Critical Ninebot Beijing Technology Co Ltd
Priority to CN201610201362.6A priority Critical patent/CN105827900A/en
Publication of CN105827900A publication Critical patent/CN105827900A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/432 Content retrieval operation from a local storage medium, e.g. hard-disk
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/462 Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention relates to the field of image acquisition and discloses a data processing method and an electronic device, which solve the prior-art technical problem that shooting material captured before the user starts the video-recording operation cannot be obtained. The method comprises the following steps: obtaining a first instruction that instructs image acquisition; in response to the first instruction, performing image acquisition to obtain first image data and storing the first image data in a first storage space; obtaining a second instruction that instructs obtaining second image data; and in response to the second instruction, intercepting the second image data from the first image data, wherein the second image data at least comprises at least one image obtained before the electronic device obtains the second instruction. The invention thereby achieves the technical effect of obtaining the second image data from shooting material captured before the second instruction is obtained.

Description

Data processing method and electronic equipment
Technical Field
The present invention relates to the field of image acquisition, and in particular, to a data processing method and an electronic device.
Background
With the continuous development of science and technology, electronic technology has developed rapidly and the variety of electronic products keeps growing, bringing people many conveniences. Electronic devices such as notebook computers, desktop computers, smart phones, and tablet computers have become an important part of daily life; users can listen to music, play games, and so on with a mobile phone or tablet computer to relieve the pressure of a fast-paced modern life.
In the prior art, most electronic devices include a camera and therefore support video recording. A typical recording process is as follows: the device detects the user's operation of starting to record a video and begins recording; it then detects the user's operation of finishing recording and stops. The recorded video is the video data captured between the start and finish of recording. This process has the following technical problem: because the electronic device starts recording only after it detects the user's start-recording operation, only video data captured after that operation can be obtained. In many cases, however, shooting material captured before the start-recording operation is valuable to the user, and the prior art cannot obtain that material.
Disclosure of Invention
The invention provides a data processing method and electronic equipment to solve the prior-art technical problem that shooting material captured before the user starts to record a video cannot be obtained.
In a first aspect, an embodiment of the present invention provides a data processing method applied to an electronic device, where the method includes:
obtaining a first instruction, wherein the first instruction instructs to acquire an image;
responding to the first instruction, executing image acquisition, obtaining first image data, and storing the first image data in a first storage space;
obtaining a second instruction, wherein the second instruction indicates that second image data is obtained;
and in response to the second instruction, intercepting and obtaining second image data from the first image data, wherein the second image data at least comprises at least one image obtained by the electronic equipment before obtaining the second instruction.
Optionally, the second image data includes:
the electronic equipment obtains at least one piece of image data within a first preset time period before the second instruction is obtained; or,
the electronic equipment obtains at least one piece of image data within a first preset time period before the second instruction is obtained, and the electronic equipment obtains at least one piece of image data within a second preset time period after the second instruction is obtained.
Optionally, after the second image data is obtained by intercepting from the first image data, the method further includes:
obtaining a third instruction, wherein the third instruction instructs to extract and save image data meeting the first condition from the second image data;
and responding to the third instruction, extracting image data meeting the first condition from the second image data, and storing the image data meeting the first condition as third image data in a second storage space.
Optionally, the extracting, from the second image data, image data satisfying the first condition includes:
and intercepting corresponding video data from the second image data based on the obtained first gesture, wherein the video data is the third image data.
Optionally, the first gesture includes a first sub-gesture and a second sub-gesture, and the capturing corresponding video data from the second image data based on the obtained first gesture includes:
detecting and obtaining a distance value between the first sub-gesture and the second sub-gesture;
and determining the length corresponding to the progress bar of the video according to the distance value and the mapping relation between the preset distance value and the length of the video progress bar.
Optionally, the intercepting, based on the obtained first gesture, corresponding video data from the second image data includes:
detecting and obtaining a first moving operation of a first hand of the user, and taking an end position of the first moving operation as a starting point of video capture;
detecting and obtaining a second moving operation of a second hand of the user, and taking the end position of the second moving operation as the end point of video capture;
and intercepting image data between the starting point and the ending point from the second image data as the video data.
Optionally, after the detecting obtains the first movement operation of the first hand of the user, the method further includes: if a first tangential operation aiming at the second image data is detected, taking a position corresponding to the first tangential operation on a progress bar of the second image data as an end position of the first moving operation, wherein the first tangential operation is a moving operation of the first hand relative to a display unit of the electronic equipment from far to near or from top to bottom; and/or
After the detecting obtains a second movement operation of a second hand of the user, the method further comprises: and if a second tangential operation aiming at the second image data is detected, taking a position corresponding to the second tangential operation on the progress bar of the second image data as an end position of the second movement operation, wherein the second tangential operation is the movement operation of the second hand relative to a display unit of the electronic equipment from far to near or from top to bottom.
Optionally, the extracting, from the second image data, image data satisfying the first condition includes:
decomposing the second image data into at least one image;
judging whether each image in the at least one image contains a preset feature or not;
and acquiring an image containing the preset features from at least one image as the third image data based on the judgment result.
Optionally, after the second image data is obtained by intercepting from the first image data in response to the second instruction, the method further includes:
and deleting the image data stored in the first storage space.
Optionally, the deleting the image data stored in the first storage space includes:
deleting all image data stored in the first storage space; or
Deleting the intercepted second image data in the first storage space; or
Deleting the image data saved in the first storage space before the electronic equipment obtains the first instruction.
In a second aspect, an embodiment of the present invention provides an electronic device, including:
the device comprises a first obtaining module, a second obtaining module and a control module, wherein the first obtaining module is used for obtaining a first instruction, and the first instruction indicates image acquisition;
the first response module is used for responding to the first instruction, executing image acquisition, obtaining first image data and storing the first image data in a first storage space;
a second obtaining module, configured to obtain a second instruction, where the second instruction indicates to obtain second image data;
and the second response module is used for responding to the second instruction and intercepting and obtaining second image data from the first image data, wherein the second image data at least comprises at least one image obtained by the electronic equipment before the second instruction is obtained.
Optionally, the electronic device further includes:
a third obtaining module, configured to obtain a third instruction, where the third instruction instructs to extract and save image data satisfying the first condition from the second image data;
and the third response module is used for responding to the third instruction, extracting the image data meeting the first condition from the second image data, and storing the image data meeting the first condition as third image data in a second storage space.
Optionally, the third response module is configured to: and intercepting corresponding video data from the second image data based on the obtained first gesture, wherein the video data is the third image data.
Optionally, the third response module includes:
a decomposition unit configured to decompose the second image data into at least one image;
the judging unit is used for judging whether each image in the at least one image contains preset characteristics;
an obtaining unit configured to obtain, as the third image data, an image including the preset feature from at least one image based on a determination result.
The invention has the following beneficial effects:
in the embodiment of the invention, the first instruction is obtained, and the first instruction indicates image acquisition; responding to the first instruction, executing image acquisition, obtaining first image data, and storing the first image data in a first storage space; obtaining a second instruction, wherein the second instruction indicates that second image data is obtained; and in response to the second instruction, intercepting and obtaining second image data from the first image data, wherein the second image data at least comprises at least one image obtained by the electronic equipment before obtaining the second instruction. That is, when the second instruction for obtaining the second image data is detected, the second image data can be obtained based on the first image data that was previously acquired and saved, thereby achieving a technical effect that the second image data can be obtained using the shooting material before the second instruction was obtained.
Drawings
FIG. 1 is a flow chart of a data processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart of acquiring third image data in the data processing method according to the embodiment of the present invention;
fig. 3 is a flowchart of a first manner of acquiring third image data through a first gesture in the data processing method according to the embodiment of the present invention;
fig. 4 is a flowchart of a second manner of acquiring third image data through a first gesture in the data processing method according to the embodiment of the present invention;
FIG. 5 is a flowchart of another manner of acquiring third image data in the data processing method according to the embodiment of the present invention;
FIG. 6 is a flow chart of a data processing method applied to a balance car in an embodiment of the present invention;
fig. 7 is a block diagram of an electronic device in an embodiment of the present invention.
Detailed Description
The invention provides a data processing method and electronic equipment to solve the prior-art technical problem that shooting material captured before the user starts to record a video cannot be obtained.
In order to solve the technical problems, the technical scheme in the embodiment of the invention has the following general idea:
obtaining a first instruction, wherein the first instruction instructs to acquire an image; responding to the first instruction, executing image acquisition, obtaining first image data, and storing the first image data in a first storage space; obtaining a second instruction, wherein the second instruction indicates that second image data is obtained; and in response to the second instruction, intercepting and obtaining second image data from the first image data, wherein the second image data at least comprises at least one image obtained by the electronic equipment before obtaining the second instruction. That is, when the second instruction for obtaining the second image data is detected, the second image data can be obtained based on the first image data that was previously acquired and saved, thereby achieving a technical effect that the second image data can be obtained using the shooting material before the second instruction was obtained.
To better understand the technical solutions of the present invention, they are described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific features in the embodiments and examples are detailed explanations of the technical solutions rather than limitations of them, and that the technical features in the embodiments and examples may be combined with each other where there is no conflict.
In a first aspect, an embodiment of the present invention provides a data processing method, please refer to fig. 1, including:
step S101: obtaining a first instruction, wherein the first instruction instructs to acquire an image;
step S102: responding to the first instruction, executing image acquisition, obtaining first image data, and storing the first image data in a first storage space;
step S103: obtaining a second instruction, wherein the second instruction indicates that second image data is obtained;
step S104: and in response to the second instruction, intercepting and obtaining second image data from the first image data, wherein the second image data at least comprises at least one image obtained by the electronic equipment before obtaining the second instruction.
The scheme is applied to an electronic device with an image capturing function. The electronic device may capture images through its own camera or through a camera that has a data connection with it. The electronic device is, for example, a mobile phone, a tablet computer, a notebook computer, a balance car, or an unmanned aerial vehicle; the embodiments of the present invention are not limited in this respect.
In step S101, the first instruction may be triggered in various ways, for example by a predetermined gesture or a predetermined button. More commonly, the electronic device generates the first instruction upon detecting a control operation that puts the device into a specific use state, such as power-on, starting the camera, starting a photographing APP (application), or starting a video-recording APP. In that case the first image data can be acquired automatically without any additional triggering operation by the user.
In step S102, the first image data is, for example, video or images. If the first image data is a video, the video can be recorded and saved continuously after responding to the first instruction; if the first image data is a set of images, images may be captured and saved at preset time intervals (for example, 0.001 s, 0.002 s, 1 s, 5 s, and so on) after responding to the first instruction. Of course, the first image data may also be acquired in other ways, which the embodiment of the present invention neither enumerates nor limits.
When the first image data is saved, it may be saved in a cache, or it may be written directly to the hard disk of the electronic device. Preferably, the first image data is stored in a buffer of the electronic device.
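As a concrete illustration of how the first image data might be buffered, the following sketch keeps recently captured frames in a bounded in-memory cache. It is a minimal sketch only: the class name FrameBuffer, the (timestamp, frame) representation, and the two-minute retention window are assumptions made for illustration and are not prescribed by this method.

```python
import time
from collections import deque
from typing import Optional

class FrameBuffer:
    """Bounded in-memory cache (the "first storage space") for recently captured frames.

    Frames older than `retention_s` seconds are dropped so the cache stays small,
    while still holding material from before a later "second instruction".
    """

    def __init__(self, retention_s: float = 120.0):
        self.retention_s = retention_s
        self._frames = deque()  # items: (timestamp, frame)

    def append(self, frame, timestamp: Optional[float] = None) -> None:
        """Store one captured frame (step S102: acquire and save first image data)."""
        ts = time.time() if timestamp is None else timestamp
        self._frames.append((ts, frame))
        self._evict(ts)

    def _evict(self, now: float) -> None:
        # Drop frames that fall outside the retention window.
        while self._frames and now - self._frames[0][0] > self.retention_s:
            self._frames.popleft()

    def snapshot(self):
        """Return all cached (timestamp, frame) pairs, oldest first."""
        return list(self._frames)
```

A capture loop started by the first instruction would simply call append for every frame delivered by the camera, so the cache always holds roughly the most recent retention window of material.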
As an alternative embodiment, when image acquisition is performed and the first image data is obtained based on step S102, the method further includes: prompting that the first image data is being acquired.
In a specific implementation, the prompt can be given in various ways, for example by a flashing indicator light or a voice prompt. On the one hand this protects the user's privacy and prevents the user from being recorded into the first image data unknowingly; on the other hand it confirms that the first image data is being acquired successfully, so that a device failure that prevents acquisition does not go unnoticed.
In step S103, the second instruction may be obtained in various ways, for example from a preset gesture detected by the electronic device, a preset button clicked on the electronic device, or a second instruction sent by another electronic device. The embodiment of the present invention neither enumerates nor limits the manner in which the second instruction is obtained.
In step S104, the second image data may likewise be an image or a video; the embodiment of the present invention is not limited. The second image data may take several forms; two are described below, although implementations are not limited to these two cases.
First, the second image data includes: the electronic equipment obtains at least one piece of image data within a first preset time period before the second instruction is obtained.
The first preset time period is, for example, 1 min or 2 min; it may be a default of the electronic device or set manually by the user of the electronic device, and the embodiment of the present invention is not limited in this respect.
Assume the electronic device detects, at 14:00, a control command that switches it on; the device enters the on state in response, generates the first instruction, and starts recording the first image data automatically. The device then detects the second instruction at 14:15. In this case the acquired second image data is image data captured before 14:15, for example the data between 14:14 and 14:15, or between 14:10 and 14:15. The above time points and time periods are merely examples, and the embodiments of the present invention are not limited to them.
This case typically applies to still-image acquisition. For example, suppose the user wants the image data captured at 14:14. In the prior art, only image data after the second instruction is detected can be acquired, so the delay in obtaining the second instruction means the data at 14:14 is lost; in the embodiment of the present invention, that image data the user desires is still available.
Second, the second image data includes: the electronic equipment obtains at least one piece of image data within a first preset time period before the second instruction is obtained, and the electronic equipment obtains at least one piece of image data within a second preset time period after the second instruction is obtained.
The first preset time period is, for example, 1 min or 2 min, and the second preset time period is, for example, 2 min or 5 min. Both may be set by default by the system or manually by the user. The second preset time period need not be a fixed duration; it may instead be determined from the user's operation at the time the second image data is obtained.
Assume again that the electronic device detects, at 14:00, a control command that switches it on, enters the on state in response, generates the first instruction, and starts recording the first image data automatically, and that it detects the second instruction at 14:15. In this case the acquired second image data includes image data both before and after 14:15, for example the data between 14:10 and 14:20, or between 14:12 and 14:17. These time points and periods are again only examples, and the embodiment of the present invention is not limited to them.
This case typically applies to burst shooting or video acquisition. For example, if the user wants continuous image data starting at 14:14 but, because of the delay in obtaining the second instruction, the electronic device only obtains it at 14:15, then besides the image data within the second preset time period after the second instruction, the image data within the first preset time period before the second instruction also needs to be obtained. This prevents the acquired second image data from being inaccurate because of the delay in obtaining the second instruction.
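As a concrete sketch of how the second image data might be selected from the cached first image data, the function below keeps the frames that fall inside a window around the moment the second instruction arrives. The function name, the parameter names pre_s and post_s, and the (timestamp, frame) list representation are assumptions for illustration, not part of the method.

```python
def intercept_second_image_data(cached_frames, second_instruction_ts: float,
                                pre_s: float = 60.0, post_s: float = 0.0):
    """Step S104: intercept the second image data from the cached first image data.

    cached_frames         : list of (timestamp, frame) pairs (the first storage space)
    second_instruction_ts : time at which the second instruction was obtained
    pre_s                 : first preset time period (material before the instruction)
    post_s                : second preset time period (material after the instruction);
                            0 reproduces the first case above, > 0 the second case

    When post_s > 0 the call is made only after that period has elapsed, so the
    frames captured after the second instruction are already in cached_frames.
    """
    start, end = second_instruction_ts - pre_s, second_instruction_ts + post_s
    return [(ts, frame) for ts, frame in cached_frames if start <= ts <= end]
```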
As an alternative embodiment, in order to save the storage space of the electronic device, after the second image data is intercepted from the first image data in response to the second instruction based on step S104, the method further includes: deleting the image data stored in the first storage space.
In a specific implementation, the image data stored in the first storage space can be deleted in several ways; three are described below, although implementations are not limited to these three cases.
In the first way, deleting the image data stored in the first storage space means deleting all the image data stored there. This releases the storage space of the electronic device to the greatest extent.
In the second way, deleting the image data stored in the first storage space means deleting the intercepted second image data from the first storage space. Because the second image data has already been intercepted and saved, that part of the data will not be needed again, so it is removed from the first storage space. This frees storage space while keeping a relatively complete copy of the first image data for later use by the user.
In the third way, deleting the image data stored in the first storage space means deleting the image data that was saved in the first storage space before the electronic device obtained the first instruction. Since the second instruction is what requests obtaining and saving the second image data, the user of the electronic device evidently does not need the earlier image data, so that earlier part can be deleted. This keeps the storage occupied by the first image data as small as possible while still meeting the device's image-acquisition needs.
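The three deletion options above can be sketched as a single helper over the same (timestamp, frame) representation used earlier. The strategy names and the function signature are invented for illustration; they are not part of the method.

```python
from typing import List, Optional, Tuple

Frame = Tuple[float, object]  # (timestamp, image) pairs, as cached in the first storage space

def clean_first_storage(cached: List[Frame], strategy: str,
                        second_data: Optional[List[Frame]] = None,
                        first_instruction_ts: Optional[float] = None) -> List[Frame]:
    """Return what remains in the first storage space after cleanup.

    strategy:
      "all"              - delete every cached frame (frees the most space)
      "intercepted"      - delete only the frames already saved as second image data
      "before_first_cmd" - delete frames cached before the first instruction was obtained
    """
    if strategy == "all":
        return []
    if strategy == "intercepted":
        taken = {ts for ts, _ in (second_data or [])}
        return [item for item in cached if item[0] not in taken]
    if strategy == "before_first_cmd":
        return [item for item in cached if item[0] >= (first_instruction_ts or 0.0)]
    raise ValueError(f"unknown strategy: {strategy}")
```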
As an alternative embodiment, after the second image data is obtained by intercepting the first image data based on step S104, referring to fig. 2, the method further includes:
step S201: obtaining a third instruction, wherein the third instruction instructs to extract and save image data meeting the first condition from the second image data;
step S202: and responding to the third instruction, extracting image data meeting the first condition from the second image data, and storing the image data meeting the first condition as third image data in a second storage space.
In step S201, the image data satisfying the first condition can take various forms; two are described below, although implementations are not limited to these two cases.
In the first case, the image data satisfying the first condition is image data selected by a first gesture. The third image data is then obtained by intercepting the corresponding video data from the second image data based on the obtained first gesture, the video data being the third image data.
The first gesture can take several forms, and the way the corresponding video data is intercepted from the second image data differs accordingly; two ways are described below, although implementations are not limited to them.
In the first way, the first gesture includes a first sub-gesture and a second sub-gesture, and intercepting the corresponding video data from the second image data based on the obtained first gesture includes (see fig. 3):
step S301: detecting and obtaining a distance value between the first sub-gesture and the second sub-gesture;
step S302: and determining the length corresponding to the progress bar of the video according to the distance value and the mapping relation between the preset distance value and the length of the video progress bar.
In step S301, the first sub-gesture is, for example, a gesture made by one of the user's hands and the second sub-gesture a gesture made by the other hand, so the distance between the user's two hands is the distance value between the first sub-gesture and the second sub-gesture. Alternatively, the first and second sub-gestures are gestures made by two fingers of the same hand, so the distance between those two fingers is the distance value. The first and second sub-gestures may also be other gestures, in which case the distance value determined from them differs accordingly; the embodiment of the present invention neither enumerates nor limits these.
The distance value between the first sub-gesture and the second sub-gesture can be determined from the captured images.
In step S302, the preset correspondence between the distance value and the length of the video progress bar may, for example, be a mapping between preset distance-value ranges and progress-bar lengths such as the one shown in Table 1, or a linear formula such as formula [1] below. Table 1 and formula [1] are only examples; the correspondence may take other forms, which the embodiment of the present invention neither enumerates nor limits.
TABLE 1 (preset distance-value ranges mapped to video progress-bar lengths)
T = k * l    [1]
where T denotes the length (min) corresponding to the progress bar of the video; the length may also be expressed in other units, which the embodiment of the present invention does not limit;
l denotes the distance value (cm) between the first sub-gesture and the second sub-gesture; the distance may also be expressed in other units, which the embodiment of the present invention does not limit;
k denotes a linear coefficient that can be set according to actual requirements.
After the distance value between the first sub-gesture and the second sub-gesture is obtained in step S301, the length corresponding to the progress bar of the video can be found by looking the distance value up in the correspondence. For example, for a distance value of 13 cm, looking it up in Table 1 yields a progress-bar length of 2 min, while computing it with formula [1] yields 13k (min). These values are only examples: the final progress-bar length differs depending on the correspondence used and the distance value obtained.
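A minimal sketch of both forms of the correspondence follows. The range table is invented (the body of Table 1 is not reproduced in the text), and the units, values, and function names are assumptions; only the shape of the mapping, a range lookup or the linear formula T = k * l, comes from the description above.

```python
# Hypothetical range table in the spirit of Table 1: (min_cm, max_cm) -> clip length in minutes.
# The actual ranges of Table 1 are not reproduced in the text, so these values are invented.
RANGE_TABLE = [
    ((0.0, 5.0), 0.5),
    ((5.0, 10.0), 1.0),
    ((10.0, 15.0), 2.0),   # e.g. a 13 cm gesture distance maps to a 2 min clip
    ((15.0, 25.0), 5.0),
]

def clip_length_from_table(distance_cm: float) -> float:
    """Look the gesture distance up in the preset range table."""
    for (lo, hi), minutes in RANGE_TABLE:
        if lo <= distance_cm < hi:
            return minutes
    return RANGE_TABLE[-1][1]  # clamp to the last range for very wide gestures

def clip_length_from_formula(distance_cm: float, k: float) -> float:
    """Formula [1]: T = k * l, with k chosen to suit the application."""
    return k * distance_cm
```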
In the second way, intercepting the corresponding video data from the second image data based on the obtained first gesture includes (see fig. 4):
step S401: detecting and obtaining a first moving operation of a first hand of the user, and taking an end position of the first moving operation as a starting point of video capture;
step S402: detecting and obtaining a second moving operation of a second hand of the user, and taking the end position of the second moving operation as the end point of video capture;
step S403: and intercepting image data between the starting point and the ending point from the second image data as the video data.
In step S401, the first hand of the user is, for example, either of the user's hands, and the end position of the first moving operation can be determined in various ways. For example, after the first moving operation of the first hand is detected, the method may further include: if a first tangential operation on the second image data is detected, taking the position on the progress bar of the second image data corresponding to the first tangential operation as the end position of the first moving operation, where the first tangential operation is a movement of the first hand from far to near, or from top to bottom, relative to the display unit of the electronic device. A far-to-near movement relative to the display unit can be detected with a depth camera, and a top-to-bottom movement with an ordinary camera. Alternatively, the position at which the first hand stops moving may be taken as the end position of the first moving operation, and so on.
In step S402, the second hand of the user is, for example: any hand of the user, the second hand may be the same as or different from the first hand, and the embodiment of the present invention is not limited.
The end position of the second moving operation can also be determined in various ways, for example: after the detecting obtains a second movement operation of a second hand of the user, the method further comprises: if a second tangential operation aiming at the second image data is detected, taking a position corresponding to the second tangential operation on a progress bar of the second image data as an end position of the second movement operation, wherein the second tangential operation is a movement operation of the second hand from far to near or from top to bottom relative to a display unit of the electronic equipment; further alternatively, a position at which the second moving operation stops moving is taken as an end position of the second moving operation, and so on.
In step S403, if the third instruction is a video extraction instruction, the obtained video data may be provided to the user directly as the extraction result; if the third instruction is an image extraction instruction, the video data may first be decomposed into at least one image, and the resulting images provided to the user as the extraction result.
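Once the start point and end point have been fixed by the two tangential operations, step S403 amounts to slicing the second image data between the two progress-bar positions. The sketch below assumes the same (timestamp, frame) list representation used earlier and an invented function name.

```python
def intercept_between(second_data, start_ts: float, end_ts: float):
    """Step S403: keep the frames between the start point and the end point.

    second_data : list of (timestamp, frame) pairs (the second image data)
    start_ts    : progress-bar position fixed by the first tangential operation
    end_ts      : progress-bar position fixed by the second tangential operation
    """
    lo, hi = sorted((start_ts, end_ts))
    clip = [(ts, f) for ts, f in second_data if lo <= ts <= hi]
    return clip  # this clip is the third image data in its video form
```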
In the second case, the image data satisfying the first condition is image data containing a preset feature. In this case, referring to fig. 5, the third image data may be obtained as follows:
step S501: decomposing the second image data into at least one image;
step S502: judging whether each image in the at least one image contains a preset feature or not;
step S503: and acquiring an image containing the preset features from at least one image as the third image data based on the judgment result.
In step S501, if the second image data is a video, each frame of the video can be treated as one image; all frames may then be used directly as the at least one image processed in step S502, or one image may be taken from every preset number of frames (for example, every 3 or 5 frames). If the second image data is a single image, that image is used directly in step S502; if the second image data contains multiple images, one image may likewise be taken from every preset number of images (for example, every 3 or 5 images) for step S502.
In step S502, the preset feature is, for example, a human face, a particular animal, or a particular landscape; the embodiment of the present invention is not limited. A feature library corresponding to the preset feature may be stored in advance, and each of the at least one image is matched against the feature library to judge whether it contains the preset feature. The preset feature may be set automatically by the system or manually by the user; the embodiment of the present invention is not limited in this respect.
In step S503, the image containing the preset features is often an image that the user desires to acquire, so that more accurate image data can be acquired based on the scheme.
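As an illustration of steps S501 to S503, the sketch below decomposes a video into frames, samples every n-th frame, and keeps the frames that contain a preset feature. OpenCV's bundled frontal-face Haar cascade stands in for the pre-stored feature library purely as an example; the method does not prescribe any particular detector, and the sampling step and function names are assumptions.

```python
import cv2

# Example feature matcher: OpenCV's pre-trained frontal-face Haar cascade.
_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def contains_preset_feature(frame) -> bool:
    """Step S502: judge whether one image contains the preset feature (here, a face)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

def extract_third_image_data(video_path: str, every_n: int = 5):
    """Steps S501 and S503: decompose the video into frames, sample every n-th frame,
    and keep the frames containing the preset feature as the third image data."""
    capture = cv2.VideoCapture(video_path)
    kept, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n == 0 and contains_preset_feature(frame):
            kept.append(frame)
        index += 1
    capture.release()
    return kept
```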
In a specific implementation, the second storage space is, for example, local storage of the electronic device or storage space on another device. After obtaining the third image data, the electronic device may save it locally, send it to a cloud, or send it to another electronic device. If the third image data is sent to another device, an encryption mechanism may be applied to it to ensure its security.
To help those skilled in the art further understand the data processing method described in the embodiment of the present invention, the method is described below as applied to a balance car.
Referring to fig. 6, the data processing method applied to the balance car includes the following steps:
step S601: the method comprises the steps that the balance car detects an operation of controlling the balance car to move by a user, responds to the operation and then moves according to a preset track, and meanwhile, the balance car generates a first instruction based on the operation;
step S602: the balance car responds to the first instruction, and then records first image data through a camera of the balance car, and caches the first image data. Meanwhile, the corresponding LED lamp on the balance car begins to flash so as to prompt the recording of the first image data;
step S603: when the balance car detects that a user opens a video application program and clicks a video button, generating a second instruction based on the starting time and the ending time of the video operation;
step S604: the balance car responds to the second instruction, so that the first image data are obtained from the cache, and the second image data are intercepted from the first image data;
step S605: the balance car provides the second image data to the user through the display unit;
step S606: the balance car detects a first gesture generated by the right hand of the user on the display unit, and determines a starting point and an end point corresponding to the third image data based on the first gesture, wherein the first gesture comprises: the right hand of the user firstly moves in front of the display unit and generates a first tangential operation (a first sub-gesture), and then the right hand of the user moves in front of the display unit and generates a second tangential operation (a second sub-gesture), wherein the position corresponding to the first tangential operation is a starting point, and the position corresponding to the second tangential operation is an end point;
step S607: the balance car generates a third instruction based on the first gesture;
step S608: and the balance car responds to the third instruction, and video data between the starting point and the ending point is extracted from the second image data, and the video data is third image data.
In a second aspect, based on the same inventive concept, an embodiment of the present invention provides an electronic device, please refer to fig. 7, including:
a first obtaining module 70, configured to obtain a first instruction, where the first instruction instructs to perform image acquisition;
a first response module 71, configured to respond to the first instruction, perform image acquisition and obtain first image data, and store the first image data in a first storage space;
a second obtaining module 72, configured to obtain a second instruction, where the second instruction indicates to obtain second image data;
a second responding module 73, configured to intercept and obtain second image data from the first image data in response to the second instruction, where the second image data includes at least one image obtained by the electronic device before obtaining the second instruction.
Optionally, the second image data includes:
the electronic equipment obtains at least one piece of image data within a first preset time period before the second instruction is obtained; or,
the electronic equipment obtains at least one piece of image data within a first preset time period before the second instruction is obtained, and the electronic equipment obtains at least one piece of image data within a second preset time period after the second instruction is obtained.
Optionally, the electronic device further includes:
a third obtaining module, configured to obtain a third instruction, where the third instruction instructs to extract and save image data satisfying the first condition from the second image data;
and the third response module is used for responding to the third instruction, extracting the image data meeting the first condition from the second image data, and storing the image data meeting the first condition as third image data in a second storage space.
Optionally, the third response module is configured to: and intercepting corresponding video data from the second image data based on the obtained first gesture, wherein the video data is the third image data.
Optionally, the first gesture includes a first sub-gesture and a second sub-gesture, and the third response module includes:
the first detection unit is used for detecting and obtaining a distance value between the first sub-gesture and the second sub-gesture;
and the determining unit is used for determining the length corresponding to the progress bar of the video according to the distance value and the mapping relation between the preset distance value and the length of the video progress bar.
Optionally, the third response module includes:
the second detection unit is used for detecting and obtaining a first movement operation of a first hand of the user, and taking the end position of the first movement operation as a starting point of video capture;
a third detecting unit, configured to detect a second moving operation for obtaining a second hand of the user, and use an end position of the second moving operation as an end point of video capture;
an intercepting unit configured to intercept image data located between the start point and the end point from the second image data as the video data.
Optionally, the second detecting unit is further configured to: if a first tangential operation aiming at the second image data is detected, taking a position corresponding to the first tangential operation on a progress bar of the second image data as an end position of the first moving operation, wherein the first tangential operation is a moving operation of the first hand relative to a display unit of the electronic equipment from far to near or from top to bottom; and/or
The third detection unit is further configured to: and if a second tangential operation aiming at the second image data is detected, taking a position corresponding to the second tangential operation on the progress bar of the second image data as an end position of the second movement operation, wherein the second tangential operation is the movement operation of the second hand relative to a display unit of the electronic equipment from far to near or from top to bottom.
Optionally, the third response module includes:
a decomposition unit configured to decompose the second image data into at least one image;
the judging unit is used for judging whether each image in the at least one image contains preset characteristics;
an obtaining unit configured to obtain, as the third image data, an image including the preset feature from at least one image based on a determination result.
Optionally, the electronic device further includes:
and the deleting module is used for deleting the image data stored in the first storage space after the second image data is intercepted from the first image data in response to a second instruction.
Optionally, the deleting module is configured to:
deleting all image data stored in the first storage space; or
Deleting the intercepted second image data in the first storage space; or
Deleting the image data saved in the first storage space before the electronic equipment obtains the first instruction.
Since the electronic device described in the second aspect of the present invention is an electronic device used for implementing the data processing method described in the first aspect of the present invention, based on the data processing method described in the first aspect of the present invention, a person skilled in the art can understand a specific structure and a modification of the electronic device, and thus details are not described here, and all electronic devices used for implementing the data processing method belong to the scope of the embodiments of the present invention to be protected.
One or more embodiments of the invention have at least the following beneficial effects:
in the embodiment of the invention, the first instruction is obtained, and the first instruction indicates image acquisition; responding to the first instruction, executing image acquisition, obtaining first image data, and storing the first image data in a first storage space; obtaining a second instruction, wherein the second instruction indicates that second image data is obtained; and in response to the second instruction, intercepting and obtaining second image data from the first image data, wherein the second image data at least comprises at least one image obtained by the electronic equipment before obtaining the second instruction. That is, when the second instruction for obtaining the second image data is detected, the second image data can be obtained based on the first image data that was previously acquired and saved, thereby achieving a technical effect that the second image data can be obtained using the shooting material before the second instruction was obtained.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (14)

1. A data processing method applied to electronic equipment is characterized by comprising the following steps:
obtaining a first instruction, wherein the first instruction instructs to acquire an image;
responding to the first instruction, executing image acquisition, obtaining first image data, and storing the first image data in a first storage space;
obtaining a second instruction, wherein the second instruction indicates that second image data is obtained;
and in response to the second instruction, intercepting and obtaining second image data from the first image data, wherein the second image data at least comprises at least one image obtained by the electronic equipment before obtaining the second instruction.
2. The method of claim 1, wherein the second image data comprises:
the electronic equipment obtains at least one piece of image data within a first preset time period before the second instruction is obtained; or,
the electronic equipment obtains at least one piece of image data within a first preset time period before the second instruction is obtained, and the electronic equipment obtains at least one piece of image data within a second preset time period after the second instruction is obtained.
3. The method of claim 1, wherein after said intercepting the second image data from the first image data, the method further comprises:
obtaining a third instruction, wherein the third instruction instructs to extract and save image data meeting the first condition from the second image data;
and responding to the third instruction, extracting image data meeting the first condition from the second image data, and storing the image data meeting the first condition as third image data in a second storage space.
4. The method according to claim 3, wherein the extracting image data satisfying the first condition from the second image data comprises:
and intercepting corresponding video data from the second image data based on the obtained first gesture, wherein the video data is the third image data.
5. The method of claim 4, wherein the first gesture comprises a first sub-gesture and a second sub-gesture, and wherein intercepting corresponding video data from second image data based on the obtained first gesture comprises:
detecting and obtaining a distance value between the first sub-gesture and the second sub-gesture;
and determining the length corresponding to the progress bar of the video according to the distance value and the mapping relation between the preset distance value and the length of the video progress bar.
6. The method of claim 4, wherein the intercepting corresponding video data from the second image data based on the obtained first gesture comprises:
detecting and obtaining a first moving operation of a first hand of the user, and taking an end position of the first moving operation as a starting point of video capture;
detecting and obtaining a second moving operation of a second hand of the user, and taking the end position of the second moving operation as the end point of video capture;
and intercepting image data between the starting point and the ending point from the second image data as the video data.
7. The method of claim 6, wherein after the detecting obtains a first movement operation of a first hand of the user, the method further comprises: if a first tangential operation aiming at the second image data is detected, taking a position corresponding to the first tangential operation on a progress bar of the second image data as an end position of the first moving operation, wherein the first tangential operation is a moving operation of the first hand relative to a display unit of the electronic equipment from far to near or from top to bottom; and/or
After the detecting obtains a second movement operation of a second hand of the user, the method further comprises: and if a second tangential operation aiming at the second image data is detected, taking a position corresponding to the second tangential operation on the progress bar of the second image data as an end position of the second movement operation, wherein the second tangential operation is the movement operation of the second hand relative to a display unit of the electronic equipment from far to near or from top to bottom.
8. The method of claim 3, wherein said extracting image data satisfying the first condition from the second image data comprises:
decomposing the second image data into at least one image;
judging whether each image in the at least one image contains a preset feature or not;
and acquiring an image containing the preset features from at least one image as the third image data based on the judgment result.
9. The method of any one of claims 1-8, wherein after the second image data is intercepted from the first image data in response to the second instruction, the method further comprises:
deleting the image data stored in the first storage space.
10. The method of claim 9, wherein the deleting the image data stored in the first storage space comprises:
deleting all image data stored in the first storage space; or
deleting the intercepted second image data from the first storage space; or
deleting the image data that was saved in the first storage space before the electronic device obtained the first instruction.
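The three deletion options of claim 10 can be expressed as alternative clean-up strategies over the records held in the first storage space. The record layout and strategy names below are illustrative assumptions.

    def clean_first_storage(records, strategy, intercepted_ids=None,
                            first_instruction_time=None):
        # records: list of dicts such as {"id": ..., "saved_at": ...}.
        # Returns the records that remain after applying the chosen deletion option.
        if strategy == "delete_all":
            return []
        if strategy == "delete_intercepted":
            removed = intercepted_ids or set()
            return [r for r in records if r["id"] not in removed]
        if strategy == "delete_before_first_instruction":
            return [r for r in records if r["saved_at"] >= first_instruction_time]
        raise ValueError("unknown strategy: " + strategy)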
11. An electronic device, comprising:
a first obtaining module, configured to obtain a first instruction, wherein the first instruction indicates image acquisition;
a first response module, configured to respond to the first instruction, perform image acquisition to obtain first image data, and store the first image data in a first storage space;
a second obtaining module, configured to obtain a second instruction, where the second instruction indicates to obtain second image data;
and a second response module, configured to respond to the second instruction and intercept second image data from the first image data, wherein the second image data comprises at least one image obtained by the electronic device before the second instruction is obtained.
12. The electronic device of claim 11, wherein the electronic device further comprises:
a third obtaining module, configured to obtain a third instruction, wherein the third instruction instructs to extract, from the second image data, image data meeting a first condition and to save the extracted image data;
and a third response module, configured to respond to the third instruction, extract the image data meeting the first condition from the second image data, and store the image data meeting the first condition as third image data in a second storage space.
13. The electronic device of claim 12, wherein the third response module is configured to intercept corresponding video data from the second image data based on an obtained first gesture, wherein the video data is the third image data.
14. The electronic device of claim 12, wherein the third response module comprises:
a decomposition unit configured to decompose the second image data into at least one image;
a judging unit, configured to judge whether each image of the at least one image contains a preset feature;
and an obtaining unit, configured to obtain, based on a result of the judging, an image containing the preset feature from the at least one image as the third image data.
CN201610201362.6A 2016-03-31 2016-03-31 Data processing method and electronic device Pending CN105827900A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610201362.6A CN105827900A (en) 2016-03-31 2016-03-31 Data processing method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610201362.6A CN105827900A (en) 2016-03-31 2016-03-31 Data processing method and electronic device

Publications (1)

Publication Number Publication Date
CN105827900A true CN105827900A (en) 2016-08-03

Family

ID=56525520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610201362.6A Pending CN105827900A (en) 2016-03-31 2016-03-31 Data processing method and electronic device

Country Status (1)

Country Link
CN (1) CN105827900A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106951090A (en) * 2017-03-29 2017-07-14 北京小米移动软件有限公司 Image processing method and device
CN107741781A (en) * 2017-09-01 2018-02-27 中国科学院深圳先进技术研究院 Flight control method, device, unmanned plane and the storage medium of unmanned plane

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002271673A (en) * 2001-03-09 2002-09-20 Fuji Photo Film Co Ltd Electronic camera and static image recording method
CN1501695A (en) * 2002-11-18 2004-06-02 矽峰光电科技股份有限公司 Method for amending internal delay in digital camera imaging
CN101309365A (en) * 2007-05-14 2008-11-19 索尼株式会社 Imaging device, method of processing captured image signal and computer program
CN102982557A (en) * 2012-11-06 2013-03-20 桂林电子科技大学 Method for processing space hand signal gesture command based on depth camera
CN103747362A (en) * 2013-12-30 2014-04-23 广州华多网络科技有限公司 Method and device for cutting out video clip
CN104038705A (en) * 2014-05-30 2014-09-10 无锡天脉聚源传媒科技有限公司 Video producing method and device
CN104506937A (en) * 2015-01-06 2015-04-08 三星电子(中国)研发中心 Method and system for sharing processing of audios and videos

Similar Documents

Publication Publication Date Title
JP6388706B2 (en) Unmanned aircraft shooting control method, shooting control apparatus, and electronic device
EP3079082B1 (en) Method and apparatus for album display
US9904774B2 (en) Method and device for locking file
RU2617393C2 (en) Method and device for file locking
WO2017032086A1 (en) Photograph capturing control method and terminal
KR101819985B1 (en) Method, device, program and computer-readable recording medium for controlling application
WO2019120068A1 (en) Thumbnail display control method and mobile terminal
CN105338409B (en) Network video preloading method and device
JP6423872B2 (en) Video classification method and apparatus
US10303433B2 (en) Portable terminal device and information processing system
KR101656633B1 (en) Method, device, program and recording medium for back up file
US10735697B2 (en) Photographing and corresponding control
CN108848313B (en) Multi-person photographing method, terminal and storage medium
CN106095465B (en) Method and device for setting identity image
EP3641295B1 (en) Shooting interface switching method and apparatus, and device and storage medium thereof
US10769743B2 (en) Method, device and non-transitory storage medium for processing clothes information
US10313537B2 (en) Method, apparatus and medium for sharing photo
CN104408428A (en) Processing method and device for same-scene photos
CN105069426A (en) Similar picture determining method and apparatus
KR20210133104A (en) Method and device for shooting image, and storage medium
CN105827900A (en) Data processing method and electronic device
CN105760075A (en) Operating record creating method and device and intelligent terminal
CN105163141B (en) The mode and device of video recommendations
CN107133551B (en) Fingerprint verification method and device
CN106408560B (en) Method and device for rapidly acquiring effective image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160803