CN115242923A - Data processing method, device, equipment and readable storage medium - Google Patents


Info

Publication number
CN115242923A
Authority
CN
China
Prior art keywords
vibration
shooting
type
parameter
configuration
Prior art date
Legal status
Pending
Application number
CN202210900464.2A
Other languages
Chinese (zh)
Inventor
赵佳宁
高丽娜
徐士立
孙逊
洪楷
杨奕青
刘思亮
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210900464.2A
Publication of CN115242923A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016Input arrangements with force or tactile feedback as computer generated output to the user
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/52Details of telephonic subscriber devices including functional features of a camera

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Environmental & Geological Engineering (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a data processing method, device, equipment, and readable storage medium, wherein the method includes the following steps: displaying a shooting interface for executing a shooting service; when the shooting interface contains a shooting object, performing focusing processing on the shooting object; and if the shooting object is focused successfully, outputting a vibration prompt for the successful focusing according to the shooting object type and/or the shooting object position to which the shooting object belongs. With this method and device, a focusing prompt can be given through vibration during the shooting service, which in turn improves the picture quality of the captured image.

Description

Data processing method, device, equipment and readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method, apparatus, device, and readable storage medium.
Background
In daily life there are many scenes that call for taking photographs. When people take a picture with a smart device (such as a smartphone), they can focus manually in certain ways; for example, tapping a point in the shooting interface on the smartphone screen makes the camera focus on that point. However, such focusing schemes rely entirely on vision, and accurate focusing requires a scene in which the visual information is relatively reliable.
However, in a scene where little visual information is available (for example, when the photographer's eyesight is weak, or the illumination is too weak or too strong), the photographer's vision is limited and the picture presented in the shooting interface cannot be perceived accurately, so the photographer usually cannot focus very precisely. The captured image is then prone to being out of focus and blurred; that is, the quality of the captured picture is low.
Disclosure of Invention
The embodiments of the present application provide a data processing method, apparatus, device, and readable storage medium, which can give a focusing prompt through vibration during a shooting service and thereby improve the picture quality of the captured image.
An embodiment of the present application provides a data processing method, including:
displaying a shooting interface for executing a shooting service;
when the shooting interface contains a shooting object, focusing the shooting object;
if the shooting object is focused successfully, outputting, through a terminal device, a vibration prompt indicating that the shooting object has been focused successfully, according to the shooting object type and/or the shooting object position to which the shooting object belongs; the terminal device is the device that displays the shooting interface.
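The claimed method steps can be sketched as a minimal simulation. All names here (`Subject`, `run_shooting_service`, `try_focus`) are hypothetical illustrations, not part of the patent or of any real device API, and the vibration prompts are represented as strings rather than actual haptic output:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Subject:
    subject_type: str   # e.g. "person", "animal", "plant"
    position: str       # e.g. "center", "left", "upper"


def run_shooting_service(subject: Optional[Subject],
                         try_focus: Callable[[Subject], bool]) -> List[str]:
    """Return the vibration prompts emitted for one frame of the shooting interface."""
    prompts: List[str] = []
    if subject is None:           # the interface contains no shooting object yet
        return prompts
    if try_focus(subject):        # focusing processing succeeded
        # prompt according to the subject's type and/or position
        prompts.append(f"type:{subject.subject_type}")
        prompts.append(f"position:{subject.position}")
    else:
        prompts.append("focus-failed")
    return prompts
```

The point of the sketch is only the control flow: no prompt without a subject, and type/position prompts only on focusing success.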
An embodiment of the present application provides a data processing apparatus, including:
the interface display module is used for displaying a shooting interface for executing shooting service;
the focusing module is used for focusing the shooting object when the shooting interface contains the shooting object;
the vibration reminding module is used for outputting, through the terminal device, a focusing success vibration prompt for the shooting object if the shooting object is focused successfully; the terminal device is the device that displays the shooting interface.
In one embodiment, the focusing success vibration prompt comprises a first vibration prompt and a second vibration prompt;
the vibration reminding module may comprise:
the object information acquisition unit is used for acquiring the shooting object type to which the shooting object belongs and the shooting object position of the shooting object in the shooting interface;
the vibration output unit is used for outputting a first vibration prompt with a first vibration effect through the terminal device when the system time reaches a first moment; the first vibration effect is used for indicating that the shooting object type to which the shooting object belongs is a set object type;
the vibration output unit is further used for outputting a second vibration prompt with a second vibration effect through the terminal device when the system time reaches a second moment; the second vibration effect is used for indicating that the shooting object position of the shooting object in the shooting interface is a set object position; the second moment is later than the first moment.
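The two-stage prompt above (a type prompt at a first moment, then a position prompt at a strictly later second moment) can be sketched as a small scheduling helper. The function name, parameters, and effect strings are illustrative assumptions, not values taken from the patent:

```python
from typing import List, Tuple


def schedule_focus_prompts(first_moment: float, interval: float,
                           type_effect: str, position_effect: str) -> List[Tuple[float, str]]:
    """Return (moment, effect) pairs for the two-stage focusing-success prompt.

    The first prompt (subject type) fires at the first moment; the second
    prompt (subject position) fires at a strictly later second moment.
    """
    if interval <= 0:
        raise ValueError("the second moment must be later than the first")
    return [(first_moment, type_effect),
            (first_moment + interval, position_effect)]
```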
In one embodiment, the vibration output unit may include:
the type table acquiring subunit is used for acquiring a type parameter mapping table when the system time reaches a first moment; the type parameter mapping table comprises a mapping relation between a configuration object type set and a configuration vibration effect parameter set, and a mapping relation exists between one configuration object type in the configuration object type set and one configuration vibration effect parameter in the configuration vibration effect parameter set;
the target parameter determining subunit is used for determining a configuration object type which is the same as the shooting object type in the configuration object type set as a first target configuration object type;
the target parameter determining subunit is further configured to determine, in the configured vibration effect parameter set, a configured vibration effect parameter having a mapping relationship with the first target configuration object type as a first target vibration effect parameter corresponding to the shooting object type;
a vibration output subunit for determining a vibration effect indicated by the first target vibration effect parameter as a first vibration effect;
and the vibration output subunit is also used for outputting a first vibration prompt in the terminal equipment according to the first target vibration effect parameter.
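The type parameter mapping table described above can be sketched as a plain dictionary lookup: a configured object type equal to the subject's type is the first target configuration object type, and the parameter mapped to it is the first target vibration effect parameter. The table contents and all names below are hypothetical examples, not values from the patent:

```python
# Hypothetical type-parameter mapping table: configured object types mapped
# to configured vibration-effect parameters (all values are illustrative).
TYPE_PARAM_TABLE = {
    "person": {"intensities": [0.8],      "duration_ms": 120},
    "animal": {"intensities": [0.5, 0.9], "duration_ms": 80},
}


def lookup_type_vibration_params(subject_type: str) -> dict:
    """Find the configured object type equal to the subject type and return
    the vibration-effect parameter mapped to it (the first target parameter)."""
    try:
        return TYPE_PARAM_TABLE[subject_type]
    except KeyError:
        raise KeyError(f"no configured object type matches {subject_type!r}") from None
```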
In one embodiment, the first target vibration effect parameter includes N vibration intensity parameters, and a vibration order corresponding to the N vibration intensity parameters;
the vibration output subunit is further specifically configured to sort the N vibration intensity parameters according to a vibration sequence corresponding to the N vibration intensity parameters, so as to obtain an intensity parameter sequence;
and the vibration output subunit is further specifically configured to perform vibration reminding in the terminal device in sequence according to the vibration intensity parameters in the intensity parameter sequence.
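The intensity-parameter ordering step above (N intensity parameters arranged by their vibration order into an intensity parameter sequence) can be sketched as follows; the function name is an illustrative assumption:

```python
from typing import List, Sequence


def intensity_sequence(intensities: Sequence[float],
                       order: Sequence[int]) -> List[float]:
    """Sort N vibration-intensity parameters by their vibration order."""
    if len(intensities) != len(order):
        raise ValueError("each intensity parameter needs a position in the order")
    # pair each intensity with its order index, sort by that index, keep intensities
    return [amp for _, amp in sorted(zip(order, intensities))]
```

The terminal device would then vibrate once per entry, in sequence order.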
In one embodiment, the vibration output unit may include:
an initial table obtaining subunit, configured to obtain an initial type parameter mapping table; the initial type parameter mapping table comprises a mapping relation between a configuration object type set and an initial configuration vibration effect parameter set, and a mapping relation exists between one configuration object type in the configuration object type set and one initial configuration vibration effect parameter in the initial configuration vibration effect parameter set;
the to-be-replaced parameter obtaining subunit is configured to, when receiving parameter modification information, sent by the terminal device, for a second target configuration object type in the configuration object type set, obtain, in the initial configuration vibration effect parameter set, a second target vibration effect parameter having a mapping relationship with the second target configuration object type; the parameter modification information comprises a modified vibration effect parameter of the type of the second target configuration object;
the parameter replacement subunit is used for replacing the second target vibration effect parameter in the initial configuration vibration effect parameter set with the modified vibration effect parameter to obtain a configuration vibration effect parameter set;
and the type table determining subunit is used for determining an initial type parameter mapping table containing the mapping relation between the configuration object type set and the configuration vibration effect parameter set as the type parameter mapping table.
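The table-update flow above (replace one initial parameter with a user-modified one to produce the final type parameter mapping table) might look like this sketch; names and table contents are hypothetical:

```python
def apply_parameter_modification(initial_table: dict, target_type: str,
                                 modified_params: dict) -> dict:
    """Build the final type-parameter mapping table by replacing the target
    configuration object type's initial parameter with the modified one."""
    if target_type not in initial_table:
        raise KeyError(f"unknown configuration object type {target_type!r}")
    table = dict(initial_table)        # leave the initial table untouched
    table[target_type] = modified_params
    return table
```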
In one embodiment, the vibration output unit is further specifically configured to obtain a position parameter mapping table when the system time reaches a second time; the position parameter mapping table comprises a mapping relation between a configuration object position set and a configuration position vibration parameter set, and a mapping relation exists between one configuration object position in the configuration object position set and one configuration position vibration parameter in the configuration position vibration parameter set;
the vibration output unit is further specifically used for determining the position of the configuration object, which is the same as the position of the shooting object, in the configuration object position set as the target configuration object position;
the vibration output unit is further specifically used for determining the configuration position vibration parameters in the configuration position vibration parameter set, which have a mapping relation with the position of the target configuration object, as the target position vibration parameters corresponding to the position of the shooting object;
and the vibration output unit is further specifically configured to determine the vibration effect indicated by the target position vibration parameter as a second vibration effect, and output a second vibration prompt in the terminal device according to the target position vibration parameter.
In one embodiment, there are at least two shooting objects, and each shooting object belongs to the same shooting object type; the focusing success vibration prompt comprises a first vibration prompt, a second vibration prompt, and a third vibration prompt;
the vibration reminding module can comprise:
the object association information acquisition unit is used for acquiring the type of the shooting object to which the shooting object belongs and the position of the shooting object in the shooting interface;
the vibration reminding unit is used for outputting a first vibration prompt with a first vibration effect through the terminal device when the system time reaches a first moment; the first vibration effect is used for indicating that the shooting object type to which the shooting objects belong is a set object type;
the vibration reminding unit is further used for outputting a second vibration prompt with a second vibration effect through the terminal device when the system time reaches a second moment; the second vibration effect is used for indicating that the shooting object positions of the shooting objects in the shooting interface are a set object position; the second moment is later than the first moment;
the vibration reminding unit is further used for outputting a third vibration prompt with a third vibration effect through the terminal device when the system time reaches a third moment; the third vibration effect is used for indicating that the number of the at least two shooting objects is a set object number; the third moment is later than the second moment.
In one embodiment, the shooting objects include a first shooting object and a second shooting object, and the shooting object type to which the first shooting object belongs is different from the shooting object type to which the second shooting object belongs; the focusing success vibration prompt for the shooting objects comprises a first object vibration prompt for the first shooting object and a second object vibration prompt for the second shooting object; the first object vibration prompt comprises a first sub-vibration prompt and a second sub-vibration prompt; the second object vibration prompt comprises a third sub-vibration prompt and a fourth sub-vibration prompt;
the vibration reminding module can comprise:
the first object reminding unit is used for outputting a first sub-vibration prompt with a first sub-vibration effect through the terminal device when the system time reaches a first sub-moment; the first sub-vibration effect is used for indicating that the shooting object type to which the first shooting object belongs is a first set object type;
the first object reminding unit is further used for outputting a second sub-vibration prompt with a second sub-vibration effect through the terminal device when the system time reaches a second sub-moment; the second sub-vibration effect is used for indicating that the shooting object position of the first shooting object in the shooting interface is a first set object position; the second sub-moment is later than the first sub-moment;
the second object reminding unit is used for outputting a third sub-vibration prompt with a third sub-vibration effect through the terminal device when the system time reaches a third sub-moment; the third sub-vibration effect is used for indicating that the shooting object type to which the second shooting object belongs is a second set object type; the third sub-moment is later than the second sub-moment;
the second object reminding unit is further used for outputting a fourth sub-vibration prompt with a fourth sub-vibration effect through the terminal device when the system time reaches a fourth sub-moment; the fourth sub-vibration effect is used for indicating that the shooting object position of the second shooting object in the shooting interface is a second set object position; the fourth sub-moment is later than the third sub-moment; the time difference between the fourth sub-moment and the third sub-moment is the same as the time difference between the second sub-moment and the first sub-moment.
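The four sub-moments above, where the gap between the fourth and third equals the gap between the second and first, can be sketched as a small scheduler; all names and the numeric gaps are illustrative assumptions:

```python
from typing import List


def schedule_two_subject_prompts(start: float, pair_gap: float,
                                 subject_gap: float) -> List[float]:
    """Return the four sub-moments t1..t4 for two subjects' type/position
    prompts; by construction t4 - t3 equals t2 - t1."""
    t1 = start                    # first subject: type prompt
    t2 = t1 + pair_gap            # first subject: position prompt
    t3 = t2 + subject_gap         # second subject: type prompt
    t4 = t3 + pair_gap            # second subject: position prompt (same gap)
    return [t1, t2, t3, t4]
```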
In one embodiment, the data processing apparatus may further include:
the region acquisition module is used for acquiring the object region where the shooting object is located in the shooting interface;
the area acquisition module is used for acquiring the region area corresponding to the object region and the interface area corresponding to the shooting interface;
the presentation effect determining module is used for determining the shooting presentation effect of the shooting object according to the region area and the interface area;
the presentation effect reminding module is used for outputting a reasonable-effect vibration prompt if the shooting presentation effect is a reasonable presentation effect;
the presentation effect reminding module is further used for outputting an abnormal-effect vibration prompt if the shooting presentation effect is an abnormal presentation effect.
In one embodiment, the presentation effect determination module may include:
a ratio determination unit for determining an area ratio between the area of the region and the area of the interface;
the first effect determining unit is used for determining the shooting presenting effect of the shooting object as an abnormal presenting effect if the area ratio is larger than the ratio threshold;
and the second effect determining unit is used for determining the shooting presenting effect of the shooting object as a reasonable presenting effect if the area ratio is smaller than the ratio threshold.
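The area-ratio check above can be sketched as follows; the 0.8 threshold is an arbitrary illustrative value, since the patent does not fix a specific ratio threshold, and the function name is hypothetical:

```python
def shooting_presentation_effect(region_area: float, interface_area: float,
                                 ratio_threshold: float = 0.8) -> str:
    """Classify the presentation effect by the area ratio between the
    subject's region and the whole shooting interface."""
    if interface_area <= 0:
        raise ValueError("interface area must be positive")
    ratio = region_area / interface_area
    # above the threshold the subject fills too much of the frame
    return "abnormal" if ratio > ratio_threshold else "reasonable"
```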
In one embodiment, the data processing apparatus may further include:
and the failure reminding module is used for outputting a focusing failure vibration prompt for the shooting object if focusing on the shooting object fails.
In one embodiment, the data processing apparatus may further include:
the type acquisition module is used for acquiring the type of the shot object to which the shot object belongs;
the analog audio generating module is used for generating object analog audio aiming at the shot object according to the shot object type;
and the audio output module is used for synchronously outputting the object simulation audio when the focusing success vibration prompt aiming at the shooting object is output.
In one embodiment, the data processing apparatus may further include:
the object type acquisition module is used for acquiring the shooting object type of the shooting object;
the key part acquisition module is used for acquiring key part data of the shot object in the shooting interface if the shot object type is the specified object type;
the emotion prediction module is used for inputting the key part data into the recognition model and determining the predicted emotion of the shooting object through the recognition model and the key part data;
and the emotion reminding module is used for synchronously outputting the emotion vibration reminding aiming at the predicted emotion when outputting the focusing success vibration reminding aiming at the shooting object if the predicted emotion is the specified emotion.
An aspect of an embodiment of the present application provides a computer device, including: a processor and a memory;
the memory stores a computer program that, when executed by the processor, causes the processor to perform the method in the embodiments of the present application.
In one aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program, where the computer program includes program instructions, and when the program instructions are executed by a processor, the method in the embodiments of the present application is performed.
In one aspect of the present application, a computer program product is provided, the computer program product comprising a computer program stored in a computer readable storage medium. The processor of the computer device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the computer device executes the method provided by the aspect of the embodiment of the present application.
In the embodiment of the application, in a shooting interface for executing a shooting service, when the shooting interface contains a shooting object, focusing processing can be performed on the shooting object; if the shooting object is focused successfully, a vibration prompt for the successful focusing can be output according to the shooting object type and/or the shooting object position to which the shooting object belongs. It should be understood that vibration reminding can be added as an auxiliary function in the focusing process of the shooting service: when focusing succeeds, a focusing success vibration prompt can be output, and the focusing result at shooting time (the shooting object type and/or the shooting object position) can be fed back clearly, definitely, and in a timely manner, so that a clear, non-defocused captured image can be obtained once focusing succeeds, and the quality of the captured image can be improved. In summary, the method and apparatus can give focusing feedback through vibration reminding during the shooting service, which in turn improves the picture quality of the captured image.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a diagram of a network architecture provided in an embodiment of the present application;
fig. 2 is a schematic view of a scene for executing a shooting service according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a data processing method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a vibration prompt for a shooting presentation effect according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of synchronously outputting a vibration reminder and an audio reminder according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of synchronously outputting a vibration reminder and an emotional reminder according to an embodiment of the present application;
FIG. 7 is a flow chart of a system provided by an embodiment of the present application;
fig. 8 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
Referring to fig. 1, fig. 1 is a diagram of a network architecture according to an embodiment of the present disclosure. As shown in fig. 1, the network architecture may include a service server 1000 and a terminal device cluster, and the terminal device cluster may include one or more terminal devices, where the number of terminal devices is not limited here. As shown in fig. 1, the plurality of terminal devices may include a terminal device 100a, a terminal device 100b, a terminal device 100c, …, and a terminal device 100n; as shown in fig. 1, the terminal device 100a, the terminal device 100b, the terminal device 100c, …, and the terminal device 100n may each establish a network connection with the service server 1000, so that each terminal device can exchange data with the service server 1000 through the network connection.
It is understood that each terminal device shown in fig. 1 may be installed with a target application, and when the target application runs in each terminal device, data interaction may be performed between the target application and the service server 1000 shown in fig. 1, respectively, so that the service server 1000 may receive service data from each terminal device. The target application may include an application having a function of displaying data information such as text, image, audio, and video, and the application may be any application capable of performing a shooting service, for example, the application may be an image beautifying application, a short video application, a shooting application, and the like, which is not illustrated herein.
In the embodiment of the present application, one terminal device may be selected from the plurality of terminal devices as a target terminal device. The terminal device may be a smart terminal that carries a multimedia data processing function (e.g., a video data playing function, a music data playing function, or a text data playing function), such as a smartphone, a tablet computer, a notebook computer, a desktop computer, a smart television, a smart speaker, a smart watch, or a smart vehicle terminal, but is not limited thereto. For example, the terminal device 100a shown in fig. 1 may serve as the target terminal device, and the target application may be integrated in the target terminal device; in this case, the target terminal device may exchange data with the service server 1000 through the target application. The service server 1000 in the present application may obtain service data from these applications; for example, the service server 1000 may obtain service data through a user's bound account. The bound account refers to an account bound by a user in an application; the user can log in to the application, upload data, and acquire data through the corresponding bound account, and the service server can likewise learn the user's login state and receive uploaded or sent data through the bound account.
It should be understood that when a user starts the target application (such as a shooting application) in a terminal device, the terminal device may display a shooting interface for executing a shooting service (alternatively, a shooting mode control for entering the shooting service may be provided in the target application, and when the user starts the target application and triggers the shooting mode control, the terminal device may display the shooting interface). The user may then photograph an object he or she wants to capture (such as a person, an animal, or a plant) by moving the terminal device; the object to be captured may be called the shooting object. When the shooting object enters the shooting interface, that is, when the shooting interface contains the shooting object, the terminal device may automatically focus on the shooting object (of course, the user may also focus on the shooting object manually, for example through a trigger operation on a certain position point in the shooting interface).
In the present application, different focusing success vibration parameters may be configured for objects of different object types (such as a person type, an animal type, a plant type, or a building type). Thus, when the above-mentioned shooting object is focused successfully, the terminal device may acquire the shooting object type to which the shooting object belongs, obtain the focusing success vibration parameter corresponding to that type, and then output a focusing success vibration prompt for the shooting object based on that parameter. (When the focusing success vibration prompt is output, it may be output on any associated device of the terminal device: for example, on the terminal device itself, or on a bracelet device, a shooting remote control device, or the like associated with the terminal device.) Further, the present application may also configure different focusing success vibration parameters for different object positions (an object position can be understood as the position where the shooting object is located in the shooting interface, such as a centered, upper, lower, left, or right position). Thus, when the shooting object is focused successfully, the terminal device may acquire the shooting object position of the shooting object in the shooting interface, obtain the focusing success vibration parameter corresponding to that position, and then output the corresponding focusing success vibration prompt based on that parameter. It is also possible to output only the focusing success vibration prompt for the shooting object type, or only the focusing success vibration prompt for the shooting object position.
Based on this, a user who executes the shooting service (who may be called a photographer) can determine through the focusing success vibration prompt that the shooting object has been focused successfully (through vibration prompts with different parameters, the photographer can at the same time determine the shooting object type and/or the shooting object position to which the shooting object belongs), and can then obtain a captured image through a trigger operation on the shooting control. It should be understood that, since it is clear at this time that the shooting object in the shooting interface is in the focusing success state, the captured image obtained at this time is a non-defocused, clearer image; that is, the image quality of the captured image is higher.
In the present application, taking as an example the case in which, after a shooting object is successfully focused, only a focusing success vibration alert for the shooting object type is output, the focusing success vibration parameters corresponding to different object types may be stored by the service server 1000. In that case, after the terminal device identifies the shooting object type to which the shooting object belongs in the shooting interface, the terminal device may send the shooting object type to the service server 1000, and the service server 1000 may then obtain the focusing success vibration parameter corresponding to the shooting object type and return it to the terminal device, so that the terminal device may output a corresponding vibration alert based on the focusing success vibration parameter to prompt the photographer that the shooting object has been successfully focused. Naturally and optionally, the focusing success vibration parameters corresponding to different object types may also be stored by the terminal device itself, so that after the terminal device identifies the shooting object type to which the shooting object belongs in the shooting interface, the terminal device may directly obtain the focusing success vibration parameter corresponding to the shooting object type and output a corresponding vibration alert based on it, so as to prompt the photographer that the shooting object has been successfully focused.
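The two storage options described above can be sketched as follows. This is a minimal illustrative sketch, not the embodiment's implementation; the class names, method names, and parameter values are all assumptions introduced for illustration.

```python
# Minimal sketch of the two storage options described above. All names and
# parameter values here are illustrative assumptions.

class ServiceServer:
    """Stand-in for the service server 1000 when it stores the parameters."""

    def __init__(self, params_by_type):
        self._params_by_type = params_by_type

    def query_focus_vibration_param(self, object_type):
        # Server-side lookup of the focusing success vibration parameter.
        return self._params_by_type[object_type]


class TerminalDevice:
    """Stand-in for the terminal device that recognizes the object type."""

    def __init__(self, server=None, local_params=None):
        self.server = server              # option 1: parameters on the server
        self.local_params = local_params  # option 2: parameters on the device

    def focus_vibration_param(self, object_type):
        if self.server is not None:
            # Send the recognized type to the server; use the returned parameter.
            return self.server.query_focus_vibration_param(object_type)
        # Otherwise look the parameter up directly on the device.
        return self.local_params[object_type]
```

Either way, the lookup that follows type recognition is the same; only where the table lives differs.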
It should be understood that when a user desires to photograph a certain object, a corresponding shooting service may be executed in a target application (e.g., a shooting application), and in the shooting interface for executing the shooting service, if the shooting object is successfully focused, the present application may deliver shooting auxiliary prompt information through vibration to indicate the successful focusing. In this way, the prompt of successful focusing can be sent timely and accurately, the photographer can be assisted in real time to shoot accurately, a shot image with high quality can be obtained, and the display effect of the shot image can be optimized.
It should be noted that the triggering operation for each control in the present application may include a contact operation such as a click or a long press, or may also include a non-contact operation such as a voice or a gesture, and this is not limited here.
It is understood that the method provided by the embodiment of the present application may be executed by a computer device, which includes but is not limited to a terminal device or a service server. The service server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, network service, cloud communication, middleware service, domain name service, security service, CDN, big data and an artificial intelligence platform.
The terminal device and the service server may be directly or indirectly connected through wired or wireless communication, and the present application is not limited herein.
Alternatively, it is understood that the computer device (the service server 1000, the terminal device 100a, the terminal device 100b, and the like) may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by connecting multiple nodes through network communication, the nodes forming a peer-to-peer (P2P) network. The P2P protocol is an application-layer protocol running on top of the Transmission Control Protocol (TCP). In a distributed system, any form of computer device, such as a service server or an electronic device such as a terminal device, may become a node in the blockchain system by joining the peer-to-peer network. For ease of understanding, the concept of a blockchain is explained below: a blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms, and is mainly used for sorting data in chronological order and encrypting the data into a ledger, so that the data cannot be tampered with or forged, while the data can still be verified, stored, and updated. When the computer device is a blockchain node, the tamper-proof and forgery-proof characteristics of the blockchain give the data in the present application (such as the focusing success vibration parameters corresponding to the object types, shot images, and the like) authenticity and security, so that the results obtained after relevant data processing based on such data are more reliable.
It should be noted that, in the specific embodiments of the present application, user-related data (such as the user's bound account, data uploaded by the user, images taken by the user, and the like) involving user information or user data is acquired and processed only after the user's authorization is obtained. That is, when the embodiments of the present application are applied to a specific product or technology, user permission or consent needs to be obtained, and the collection, use, and handling of relevant data need to comply with relevant laws, regulations, and standards of relevant countries and regions.
For ease of understanding, please refer to fig. 2, where fig. 2 is a schematic view of a scenario for performing a shooting service according to an embodiment of the present application. The terminal device 100a shown in fig. 2 may be the terminal device 100a in the terminal device cluster according to the embodiment corresponding to fig. 1.
Taking the target application as a shooting application, as shown in fig. 2, the terminal device 100a may be a terminal device corresponding to the user a, and the user a may start the shooting application in the terminal device 100a. Subsequently, the terminal device 100a may display a shooting interface for executing a shooting service (such as the shooting interface 2001 shown in fig. 2). The shooting interface 2001 may include a shooting frame 200 and various controls, such as a shooting mode control (which may include a photo mode control, a portrait mode control, and a video mode control) and a shooting control 2a. The user a may select a corresponding shooting mode through a trigger operation on the shooting mode control. Taking the case in which the user a selects the photo mode as an example, the user a may cause mirror image data of an object desired to be shot to be presented in the shooting interface 2001 by moving the terminal device 100a (in practice, the object desired to be shot is presented in the shooting frame 200 in the shooting interface 2001).
As shown in fig. 2, when a subject 20a desired to be photographed is included in the photographing interface 2001 (the subject 20a may be referred to as a photographic subject), the terminal device 100a may automatically perform an autofocus process on the photographic subject 20a. Subsequently, if the photographic subject 20a is focused successfully, the terminal device 100a may output a focusing success vibration alert for the photographic subject 20a (for example, as shown in fig. 2, the terminal device 100a may perform a vibration alert). It should be understood that, through this vibration-based auxiliary prompt, the user a can learn that focusing has succeeded at this time and shoot the photographic subject 20a through a trigger operation on the shooting control 2a. When the user a triggers the shooting control 2a, the obtained shot picture (shot image) also has higher quality (for example, the photographic subject is not defocused, not blurred, and sufficiently clear).
In the above-described focusing process on the photographic subject 20a, the user a may manually perform the focusing process on the photographic subject 20a by a trigger operation on a certain position point (e.g., a certain point in a partial region where the photographic subject 20a is located) in the photographic interface 2001. The present application does not limit the manner of focusing processing on a photographic subject. It should be further noted that, when the photographic subject 20a successfully focuses, the terminal device 100a may determine, according to the type of the subject to which the photographic subject 20a belongs, vibration parameters (for example, parameters such as vibration intensity and vibration frequency) of a focusing success vibration alert for the photographic subject 20a, so as to prompt the photographer in real time to which type of subject the successfully focused subject belongs. For example, as shown in fig. 2, the type of the object to which the photographic subject 20a belongs may be a person type, and the terminal device 100a may output a vibration alert corresponding to the person type, so that not only can the photographer be prompted that the photographic subject has focused successfully at this time, but also what type of the object to which the photographic subject belongs at this time can be prompted at the same time. Alternatively, when the photographic subject 20a is successfully focused, the terminal device 100a may determine a vibration parameter (for example, parameters such as vibration intensity, vibration number, and the like) of a vibration alert for focusing success for the photographic subject 20a according to the subject position of the photographic subject 20a in the photographic interface 2001, so as to prompt the photographer in real time of the position of the subject in the photographic interface where the focusing is successful. 
For example, if the position of the photographic subject 20a in the photographic interface 2001 shown in fig. 2 is a non-centered position, the terminal device 100a may output a vibration alert corresponding to the non-centered position, so as to prompt the photographer that the photographic subject has focused successfully and that its position is not centered at this time. It should be noted that the terminal device may also output both the vibration alert for the object type and the vibration alert for the object position. For example, when the photographic subject 20a is focused successfully, the terminal device 100a may perform different vibration alerts in sequence according to the type of the subject to which the photographic subject 20a belongs and the position of the subject, so that the photographer is prompted in real time of both the type and the position of the currently focused subject.
Further, please refer to fig. 3, where fig. 3 is a schematic flow chart of a data processing method according to an embodiment of the present application. The method may be executed by a terminal device (for example, any terminal device in the terminal device cluster 100 shown in fig. 1) or a service server (for example, the service server 1000 shown in fig. 1) or may be executed by both the terminal device and the service server. For ease of understanding, the present embodiment is described as an example in which the method is executed by the terminal device described above. Wherein, the data processing method at least comprises the following steps S101-S103:
and step S101, displaying a shooting interface for executing the shooting service.
In the application, after the user starts the target application, the terminal device may display a shooting interface for executing a shooting service. The target application may be any application with a shooting function, for example, the target application may be a social application, a short video application, an image beautification application, a special effect application, a shooting application, and the like. The target application may be a standalone application, or may be an embedded sub-application in applications such as a video application, an entertainment application, a shopping application, and the like, which shall not be limited herein. Of course, the target application may provide a shooting service control for executing the shooting service, and after the target application is started, the user may enter the shooting interface for executing the shooting service by triggering the shooting service control.
Specifically, in one scenario, taking the target application as a shooting application as an example, after the user starts the target application, the terminal device may display a shooting interface including a shooting frame for executing a shooting service, and the user may bring an object desired to be shot into the shooting frame by moving the terminal device (of course, the object desired to be shot may also be brought into the shooting frame by moving the object itself). For example, taking the embodiment corresponding to fig. 2 as an exemplary scenario, when the user a starts the shooting application, the user a can view the shooting interface 2001 for executing the shooting service; the shooting interface 2001 may include the shooting frame 200, and the user may execute the shooting service based on the shooting frame 200. It should be noted that the area of the shooting frame may be smaller than or equal to the interface area of the shooting interface, that is, the shooting frame may also be displayed in full screen.
And step S102, when the shooting interface contains the shooting object, carrying out focusing processing on the shooting object.
In the present application, a subject may be a subject that a photographer desires to photograph, and when a subject a desires to photograph another subject b, the subject a may be referred to as a photographer (or a photographing control subject), and the subject b may be referred to as a photographing subject. When the shooting interface contains the shooting object, the terminal device can perform focusing processing on the shooting object. The focusing process may be an automatic focusing process of the terminal device, or may refer to a manual focusing process performed by a photographer in a certain manner, where the manual focusing process of the photographer may be a manner of triggering an operation on a certain control or a certain area in the shooting interface, or may be a voice control manner, or may be any other manner (e.g., a non-contact manner such as a gesture) capable of performing the manual focusing process, and the manner of the focusing process is not limited in the present application.
And step S103, if the shooting object is successfully focused, outputting a vibration prompt aiming at the successful focusing of the shooting object according to the type and/or the position of the shooting object to which the shooting object belongs.
In the application, if the shooting object is successfully focused, the terminal device can output a vibration prompt aiming at the focusing success of the shooting object. According to the method and the device, different vibration effect parameters can be configured for different object types, so that when the shooting object is focused successfully, the vibration effect parameters of the shooting object can be determined according to the type of the shooting object to which the shooting object belongs, and the terminal equipment carries out vibration reminding according to the vibration effect parameters. In addition, the terminal device may output a vibration alert for the object type of the photographic object, and may also output a vibration alert for the object position of the photographic object in the photographic interface (the vibration alert may be used to prompt the photographer, and the photographic object is currently in a center position, an upper position, a lower position, a left position, or a right position in the photographic interface). The terminal equipment can perform focusing success vibration reminding only according to the shooting object type to which the shooting object belongs or the shooting object position, or perform focusing success vibration reminding together based on the shooting object type to which the shooting object belongs and the shooting object position, and when performing focusing success vibration reminding together based on the shooting object type and the shooting object position, the sequence of the vibration reminding of the shooting object type and the vibration reminding of the shooting object position is not limited.
Specifically, the vibration alert used for prompting the object type of the shooting object may be referred to as a first vibration alert, and the vibration alert used for prompting the object position of the shooting object in the shooting interface may be referred to as a second vibration alert; that is, the focusing success vibration alert for the shooting object may include the first vibration alert and the second vibration alert. At this time, a specific implementation manner for outputting the focusing success vibration alert for the shooting object through the terminal device may be as follows: the shooting object type to which the shooting object belongs and the object position of the shooting object in the shooting interface may be obtained; when the system time reaches a first moment, a first vibration alert with a first vibration effect may be output through the terminal device, where the first vibration effect is used for representing that the shooting object type to which the shooting object belongs is a set object type; when the system time reaches a second moment, a second vibration alert with a second vibration effect may be output through the terminal device, where the second vibration effect is used for representing that the object position of the shooting object in the shooting interface is a set object position; the second moment is later than the first moment.
The set object type may include a type used for describing an object, and may specifically include a person type, an animal type, a plant type, a building type, and the like. Different vibration effect parameters may be configured for different set object types, so that different set object types have different vibration effects, and when it is determined that the shooting object type to which the shooting object belongs is a set object type, the first vibration alert may have the vibration effect (e.g., the first vibration effect) corresponding to that set object type. Similarly, the set object position may refer to the position of the object in the shooting interface, and may specifically include a centered position, an upper position, a lower position, a left position, a right position, and the like. Different vibration effect parameters may be configured for different set object positions, so that different set object positions have different vibration effects, and when it is determined that the position of the shooting object in the shooting interface is a set object position, the second vibration alert may have the vibration effect (e.g., the second vibration effect) corresponding to that set object position. It should be understood that the present application may preferentially perform the vibration alert with respect to the type of the photographic subject (e.g., outputting the first vibration alert at the first moment) and then, after a certain time interval (e.g., an interval of 200 ms, 100 ms, and the like), perform the vibration alert with respect to the object position of the photographic subject (e.g., outputting the second vibration alert at the second moment).
Of course, the present application may also perform vibration alert with respect to the object position of the object preferentially, and then perform vibration alert with respect to the object type of the object, and the order of vibration alert for the object type and the object position is not limited in the present application.
According to the method and the device, different vibration effect parameters can be configured for different set object types, so that when the shooting object type to which the shooting object belongs is determined, the corresponding vibration effect parameters can be found based on the shooting object type to which the shooting object belongs, and then vibration reminding is carried out based on the vibration effect parameters. Specifically, when the system time reaches the first time, the specific implementation manner of outputting the first vibration prompt with the first vibration effect through the terminal device may be: when the system time reaches a first moment, a type parameter mapping table can be obtained; the type parameter mapping table comprises a mapping relation between a configuration object type set and a configuration vibration effect parameter set, and a mapping relation exists between one configuration object type in the configuration object type set and one configuration vibration effect parameter in the configuration vibration effect parameter set; subsequently, a configuration object type which is the same as the shooting object type in the configuration object type set can be determined as a first target configuration object type; then, the configured vibration effect parameter having a mapping relation with the first target configured object type in the configured vibration effect parameter set may be determined as the first target vibration effect parameter corresponding to the shooting object type; and finally, determining the vibration effect indicated by the first target vibration effect parameter as the first vibration effect, and outputting a first vibration prompt in the terminal equipment according to the first target vibration effect parameter.
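The mapping-table lookup just described can be sketched as follows. This is a minimal illustrative sketch, assuming a simple key-value representation of the type parameter mapping table; the table contents and parameter fields are invented for illustration and are not specified by the embodiment.

```python
# Illustrative sketch of the type parameter mapping table lookup described
# above. Table contents and parameter fields are assumptions for illustration.

TYPE_PARAMETER_MAPPING_TABLE = {
    # configuration object type -> configured vibration effect parameter
    "person": {"intensities": [0.8, 0.5], "order": [1, 0]},
    "animal": {"intensities": [0.6], "order": [0]},
    "plant":  {"intensities": [0.4], "order": [0]},
}


def first_target_vibration_param(shooting_object_type):
    """Find the configuration object type equal to the shooting object type
    (the 'first target configuration object type') and return the configured
    vibration effect parameter mapped to it, or None if no entry matches."""
    return TYPE_PARAMETER_MAPPING_TABLE.get(shooting_object_type)
```

The vibration effect indicated by the returned parameter would then be used as the first vibration effect when outputting the first vibration alert.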
The vibration effect parameters in the present application may include a vibration intensity parameter, a vibration frequency parameter, and the like, that is, each configured vibration effect parameter may include a vibration intensity parameter, and the present application may configure different vibration times and a vibration intensity parameter corresponding to each vibration for different vibration reminders, that is, to a certain vibration reminder (for example, to a first vibration reminder of an object type), vibration may be performed for multiple times according to different vibration intensity parameters. That is to say, each configured vibration effect parameter in the present application may specifically include N (N is a positive integer) vibration intensity parameters, and may also include a vibration sequence corresponding to the N vibration intensity parameters. Correspondingly, the first target vibration effect parameter also includes N vibration intensity parameters and a vibration sequence corresponding to the N vibration intensity parameters, and at this time, a specific implementation manner for outputting the first vibration alert according to the first target vibration effect parameter in the terminal device may be: sequencing the N vibration intensity parameters according to the vibration sequence corresponding to the N vibration intensity parameters to obtain an intensity parameter sequence; subsequently, in the terminal device, vibration reminding can be performed according to the vibration intensity parameters in the intensity parameter sequence in sequence.
For example, the vibration intensity parameters included in the first target vibration effect parameter are vibration intensity parameter 1 and vibration intensity parameter 2, and the vibration sequence of vibration intensity parameter 1 and vibration intensity parameter 2 is { vibration intensity parameter 2, vibration intensity parameter 1}, so that when the first vibration prompt for the type of the photographic object is output, first vibration may be performed according to vibration intensity parameter 2, and then vibration may be performed again according to vibration intensity parameter 1 at a certain interval (where the interval may be set to be short, for example, 5ms, 10ms, and the like), and then vibration performed twice before and after may be used to represent that the type of the photographic object is the set object type. It should be noted that, in the present application, the content included in each configured vibration effect parameter may also include other parameters (for example, the vibration duration of each vibration) besides the vibration intensity parameter and the vibration sequence, which is described herein by way of example only, and the specific content included in each configured vibration effect parameter is not limited in the present application.
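The ordering step above can be sketched as follows, under the assumption that the vibration sequence is represented as a list of indices into the list of N vibration intensity parameters; this representation is an illustrative choice, not one fixed by the embodiment.

```python
# Sketch of sorting the N vibration intensity parameters by their vibration
# sequence to obtain the intensity parameter sequence. The index-list
# representation of the vibration sequence is an assumption for illustration.

def intensity_parameter_sequence(intensity_params, vibration_order):
    """Sort intensity parameters into the intensity parameter sequence.

    vibration_order[k] gives the index (into intensity_params) of the k-th
    vibration to perform; in practice, short gaps (e.g., 5-10 ms) would
    separate the successive vibrations.
    """
    return [intensity_params[i] for i in vibration_order]
```

For the example in the text, parameters {intensity 1, intensity 2} with vibration sequence {intensity 2, intensity 1} become `intensity_parameter_sequence(["intensity-1", "intensity-2"], [1, 0])`, which yields intensity 2 first and intensity 1 second.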
It should be noted that, for each configuration object type (which may also be referred to as a setting object type), the present application configures a configuration vibration effect parameter for the configuration object type, where the configuration vibration effect parameter may be used as an initial configuration vibration effect parameter, and a user may modify the initial configuration vibration effect parameter according to different requirements, and the configuration vibration effect parameter included in the type parameter mapping table may include the initial configuration vibration effect parameter, and may also include a modified configuration vibration effect parameter (which may be referred to as a modified vibration effect parameter for convenience of distinction). Taking the example that a user corresponding to the terminal device modifies a certain initially configured vibration effect parameter, a specific implementation manner for determining the type parameter mapping table may be: an initial type parameter mapping table can be obtained; the initial type parameter mapping table may include a mapping relationship between a configuration object type set and an initial configuration vibration effect parameter set, and a mapping relationship exists between one configuration object type in the configuration object type set and one initial configuration vibration effect parameter in the initial configuration vibration effect parameter set; when parameter modification information aiming at a second target configuration object type in a configuration object type set sent by a terminal device is received, a second target vibration effect parameter having a mapping relation with the second target configuration object type can be obtained in an initial configuration vibration effect parameter set; wherein the parameter modification information may include a modified vibration effect parameter of the second target configuration object type; then, 
replacing the second target vibration effect parameter in the initial configuration vibration effect parameter set with the modified vibration effect parameter to obtain a configuration vibration effect parameter set; and then, determining an initial type parameter mapping table containing the mapping relation between the configuration object type set and the configuration vibration effect parameter set as a type parameter mapping table.
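The replacement step above can be sketched as follows; the function name and table representation are illustrative assumptions, and the initial table is left unchanged so it can serve as the fallback default.

```python
# Sketch of applying a user's parameter modification to the initial type
# parameter mapping table, as described above. Names are illustrative.

def apply_parameter_modification(initial_table, target_object_type, modified_param):
    """Replace the entry mapped to target_object_type (the 'second target
    vibration effect parameter') with the modified vibration effect
    parameter, yielding the final type parameter mapping table."""
    if target_object_type not in initial_table:
        raise KeyError(f"unknown configuration object type: {target_object_type!r}")
    table = dict(initial_table)  # copy, so the initial table is preserved
    table[target_object_type] = modified_param
    return table
```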
According to the method and the device, different vibration effect parameters can be configured for different set object types, so that when the shooting object type of the shooting object is determined, the corresponding vibration effect parameters can be found based on the shooting object type of the shooting object, and then vibration reminding is carried out based on the vibration effect parameters. In a similar way, different vibration effect parameters can be configured for different set object positions, so that when the position of the shooting object in the shooting interface is determined, the corresponding vibration effect parameters can be found based on the position of the shooting object, and then vibration reminding is performed based on the vibration effect parameters. That is to say, when the system time reaches the second time, a specific implementation manner of outputting the second vibration alert having the second vibration effect through the terminal device may be: when the system time reaches a second moment, a position parameter mapping table can be obtained; the position parameter mapping table comprises a mapping relation between a configuration object position set and a configuration position vibration parameter set, and a mapping relation exists between one configuration object position in the configuration object position set and one configuration position vibration parameter in the configuration position vibration parameter set; subsequently, the position of the configuration object which is the same as the position of the shooting object in the configuration object position set can be determined as a target configuration object position; determining a configuration position vibration parameter which has a mapping relation with the position of a target configuration object in the configuration position vibration parameter set as a target position vibration parameter corresponding to the position of the shooting object; and then, 
determining the vibration effect indicated by the target position vibration parameter as a second vibration effect, and outputting a second vibration prompt in the terminal equipment according to the target position vibration parameter.
Correspondingly, for each configuration object position (which may also be referred to as a set object position), the present application configures a configuration position vibration parameter for the configuration object position, where the configuration position vibration parameter may be used as an initial configuration position vibration parameter, and a user may modify the initial configuration position vibration parameter according to different requirements. The configuration position vibration parameters included in the position parameter mapping table may include the initial configuration position vibration parameters, and may also include modified configuration position vibration parameters (for convenience of distinction, these may be referred to as modified position vibration parameters). The detailed process will not be described again herein.
Optionally, as can be seen from the above, when the shooting object is subjected to vibration reminding, vibration reminding with different vibration effects can be performed according to different object types, and vibration reminding with different vibration effects can also be performed according to different object positions. In addition, when the shooting interface comprises at least two shooting objects and the types of the objects to which the at least two shooting objects belong are the same, the method and the device can perform vibration reminding according to the number of the objects. That is, the focus success vibration alert may include a third vibration alert for the number of the photographic subjects in addition to the first vibration alert for the type of the photographic subject and the second vibration alert for the position of the photographic subject.
That is to say, under the condition that the number of the shooting objects is at least two and the shooting object type to which each shooting object belongs is the same, the focusing success vibration alert for the shooting objects may specifically include a first vibration alert, a second vibration alert, and a third vibration alert. At this time, a specific implementation manner for outputting the focusing success vibration alert for the shooting objects through the terminal device may be: the shooting object type to which the shooting objects belong and the shooting object position of the shooting objects in the shooting interface can be obtained; when the system time reaches a first moment, a first vibration prompt with a first vibration effect can be output through the terminal device; the first vibration effect is used for representing that the shooting object type to which the shooting objects belong is a set object type; when the system time reaches a second moment, a second vibration prompt with a second vibration effect can be output through the terminal device; the second vibration effect is used for representing that the shooting object position where the shooting objects are located in the shooting interface belongs to a set object position; the second moment is later than the first moment; when the system time reaches a third moment, a third vibration prompt with a third vibration effect can be output through the terminal device; the third vibration effect is used for representing that the number of the at least two shooting objects is a set object number; the third moment is later than the second moment.
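As a minimal sketch of this three-moment schedule, the offsets at which the three alerts fire can be computed as below; the equal spacing between moments and the 200 ms default are assumptions for the sketch, not values fixed by the present application:

```python
def focus_success_alert_offsets(first_moment_ms, interval_ms=200):
    """Offsets (in ms) for the three parts of the focusing success vibration
    alert: the type alert at the first moment, the position alert at the
    later second moment, and the number alert at the still later third
    moment."""
    return {
        "type_alert": first_moment_ms,
        "position_alert": first_moment_ms + interval_ms,
        "number_alert": first_moment_ms + 2 * interval_ms,
    }
```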
It can be understood that the present application may configure different vibration effect parameters for different numbers of objects; for example, for a number from 0 to 9, characterization may be performed by two short vibrations, and for a number greater than 9, characterization may be performed by one long vibration (or two long vibrations). After the first vibration alert for the shooting object type is output, the second vibration alert for the shooting object position may be output after a certain time interval (which may be a longer time, such as 200 ms or 300 ms, and may be referred to as a first interval duration), and then, after a further time interval (which may be the same as the first interval duration), the third vibration alert for the number of shooting objects may be output. In this way, the object type, the object position, and the object number of shooting objects of a certain shooting object type can be presented by successive vibrations. It should be noted that the output sequence of the vibration alerts may be: first the vibration alert for the shooting object type, then the vibration alert for the shooting object position, and then the vibration alert for the number of shooting objects.
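The number-to-vibration encoding in the example above can be sketched as follows; the concrete durations (60 ms short, 400 ms long) are assumptions for illustration:

```python
def number_vibration_pattern(object_count, short_ms=60, long_ms=400):
    """Encode the number of shooting objects as a list of vibration
    durations: two short vibrations for a count from 0 to 9, one long
    vibration for a larger count."""
    if 0 <= object_count <= 9:
        return [short_ms, short_ms]
    return [long_ms]
```

On a real device, such a duration list might feed a waveform-style vibration API, but that mapping is outside the scope of this sketch.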
Optionally, it can be seen from the above that, when vibration reminding is performed on the shooting object, vibration reminders with different vibration effects can be performed according to different object types and according to different object positions, and, when the shooting interface includes at least two shooting objects and the object types to which the at least two shooting objects belong are the same, vibration reminding can be performed according to the number of objects. In addition, when the shooting interface includes shooting objects of different object types, the method and the device can perform different vibration reminders for the shooting objects of different object types. For example, when the shooting interface includes shooting objects of an animal type and shooting objects of a plant type, vibration reminding can first be performed for the shooting objects whose object type is the animal type (which may specifically include a type vibration reminder for the animal type, a position vibration reminder for the object positions of the animal-type shooting objects, and a number vibration reminder for the number of shooting objects belonging to the animal type); then, after a certain time interval (which may be a longer time period), vibration reminding can be performed for the shooting objects whose object type is the plant type (which may specifically include a type vibration reminder for the plant type, a position vibration reminder for the object positions of the plant-type shooting objects, and a number vibration reminder for the number of shooting objects belonging to the plant type).
That is, taking an example that the photographic subject includes a first photographic subject and a second photographic subject, and the type of the photographic subject to which the first photographic subject belongs is different from the type of the photographic subject to which the second photographic subject belongs, the vibration alert for the photographic subject may include a vibration alert for the first photographic subject (which may be referred to as a first subject vibration alert) and a vibration alert for the second photographic subject (which may be referred to as a second subject vibration alert); the first object vibration reminder can also comprise a vibration reminder (which can be called as a first sub vibration reminder for convenient distinction) aiming at the object type of the first shooting object and a vibration reminder (which can be called as a second sub vibration reminder for convenient distinction; and in a feasible case, a fifth sub vibration reminder aiming at the object number of the first shooting object); the second object vibration alert may further include a vibration alert for the object type of the second photographic object (which may be referred to as a third sub vibration alert for ease of distinction) and a vibration alert for the object location (which may be referred to as a fourth sub vibration alert for ease of distinction; and in a possible case, may further include a sixth sub vibration alert for the number of objects of the second photographic object). 
At this time, the specific implementation manner of outputting the focusing success vibration prompt for the shooting object through the terminal device may be as follows: when the system time reaches a first sub-moment, outputting a first sub-vibration prompt with a first sub-vibration effect through the terminal device; the first sub-vibration effect is used for representing that the shooting object type to which the first shooting object belongs is a first set object type; when the system time reaches a second sub-moment, outputting a second sub-vibration prompt with a second sub-vibration effect through the terminal device; the second sub-vibration effect is used for representing that the shooting object position of the first shooting object in the shooting interface belongs to a first set object position; the second sub-moment is later than the first sub-moment; when the system time reaches a third sub-moment, outputting a third sub-vibration prompt with a third sub-vibration effect through the terminal device; the third sub-vibration effect is used for representing that the shooting object type of the second shooting object belongs to a second set object type; the third sub-moment is later than the second sub-moment; when the system time reaches a fourth sub-moment, outputting a fourth sub-vibration prompt with a fourth sub-vibration effect through the terminal device; the fourth sub-vibration effect is used for representing that the position of the second shooting object in the shooting interface belongs to a second set object position; the fourth sub-moment is later than the third sub-moment; the time difference between the fourth sub-moment and the third sub-moment is the same as the time difference between the second sub-moment and the first sub-moment.
It can be understood that, at the first sub-moment, a vibration alert may first be performed for the object type (e.g., the first set object type) of the first shooting object (e.g., the first sub-vibration alert is output), and then, after a certain time interval (which may be a shorter time and may be referred to as the first interval sub-period), a vibration alert may be performed for the object position (e.g., the first set object position) of the first shooting object (e.g., the second sub-vibration alert is output). At this point, the vibration alerts for one object type may be considered to have ended; after a further time interval (which may be a longer time here and may be referred to as a second interval sub-period, i.e., the second interval sub-period may be longer than the first interval sub-period), a vibration alert may be performed for the object type (e.g., the second set object type) of the second shooting object (e.g., the third sub-vibration alert is output), and then, after a time interval that may be the same as the first interval sub-period, a vibration alert may be performed for the object position (e.g., the second set object position) of the second shooting object (e.g., the fourth sub-vibration alert is output). In this way, the corresponding focusing success vibration alerts can be performed on shooting objects of different object types in a differentiated and reasonable manner.
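The two interval durations above (a shorter gap within one object's alerts, a longer gap between objects of different types) can be sketched as a schedule of offsets; the 100 ms and 300 ms values and the object-dictionary shape are assumptions for this sketch:

```python
def multi_object_alert_schedule(objects, first_interval_ms=100,
                                second_interval_ms=300):
    """Offsets (in ms) for the sub-vibration alerts of several shooting
    objects: within one object, the type alert and the position alert are
    separated by the shorter first interval sub-period; between objects,
    the longer second interval sub-period applies."""
    schedule, t = [], 0
    for index, obj in enumerate(objects):
        if index > 0:
            t += second_interval_ms  # longer gap between different object types
        schedule.append((t, "type:" + obj["type"]))
        t += first_interval_ms       # shorter gap between type and position alerts
        schedule.append((t, "position:" + obj["position"]))
    return schedule
```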
According to the method and the device, the focusing success vibration prompt for the shooting object can be output under the condition that the shooting object is successfully focused, and a focusing failure vibration prompt for the shooting object can be output under the condition that focusing on the shooting object fails. That is, if focusing on the shooting object fails, a focusing failure vibration alert for the shooting object may be output. Correspondingly, the present application can also configure a vibration effect parameter for the focusing failure state, and when focusing on some shooting object fails, the terminal device can perform vibration reminding according to that vibration effect parameter.
It should be noted that the vibration reminding method provided by the embodiment of the present application may also be applied to other shooting requirements in a shooting service. For example, if there is a requirement for brightness adjustment in the shooting service, when the brightness is adjusted to a reasonable level, a vibration reminder indicating that the brightness adjustment is reasonable may be output; correspondingly, when the brightness adjustment is not reasonable, a vibration reminder indicating abnormal brightness adjustment may be output. For another example, if there is a requirement for adjusting a flash lamp in the shooting service, then under the condition that the flash lamp is turned on, a vibration reminder indicating that the flash lamp is turned on may be output; correspondingly, under the condition that the flash lamp is not turned on, a vibration reminder indicating that the flash lamp is not turned on may be output. That is to say, the vibration reminding method provided by the present application is not limited to the focusing prompt of the shooting service, but can also be applied to other auxiliary shooting functions in the shooting service; the application scenario of the vibration reminding method provided by the present application is not limited herein.
It can be seen from the above description that, when the shooting object is successfully focused, a vibration reminder for successful focusing of the shooting object can be output according to the shooting object type to which the shooting object belongs and/or the shooting object position. The focusing success vibration reminder for the shooting object can be output through the terminal device: the terminal device may output the reminder on the terminal device itself, or may output it on an associated device. The associated device may refer to a device having an association relationship with the terminal device; exemplarily, the associated device may include a bracelet device, a shooting remote control device, and the like.
In the embodiment of the application, in a shooting interface for executing a shooting service, when the shooting interface contains a shooting object, focusing processing can be performed on the shooting object, and if the shooting object is focused successfully, a vibration prompt for successful focusing of the shooting object can be output according to the shooting object type to which the shooting object belongs and/or the shooting object position. It should be understood that the auxiliary function of vibration reminding can be added in the focusing process of the shooting service: when focusing succeeds, a focusing success vibration reminder can be output, and the focusing result in shooting can be fed back clearly and in time, so that a non-defocused, clear shot image (shot picture) can be obtained under the condition of successful focusing, and the quality of the shot image can be improved. In conclusion, the method and the device can perform focusing feedback through vibration reminding in the shooting service, so that the picture quality of the shot picture can be improved.
Optionally, according to the above description, when the shooting object is successfully focused, vibration reminders with different vibration effects can be performed for shooting objects of different object types, for shooting objects at different object positions, and for different numbers of shooting objects. In addition, the method and the device can also perform vibration reminding on other object information related to the shooting object in the shooting service; for example, vibration reminding can be performed according to the shooting presentation effect of the shooting object in the shooting interface. For instance, when the area proportion of the shooting object in the shooting interface is too large or too small, the proportion of the shooting object is unbalanced, the shooting presentation effect is degraded, and a shot image of poor quality is easily produced. For convenience of understanding, please refer to fig. 4 together; fig. 4 is a schematic flowchart illustrating a vibration reminding process for a shooting presentation effect according to an embodiment of the present application. The method may be performed by a terminal device (for example, any terminal device in the terminal device cluster 100 shown in fig. 1) or a service server (for example, the service server 1000 shown in fig. 1), or may be performed by both the terminal device and the service server. For ease of understanding, the present embodiment is described with the method executed by the terminal device. The data processing method at least comprises the following steps S401 to S406:
in step S401, a subject area where a subject is located is acquired in a shooting interface.
Specifically, the object region may be the region enclosed by the object boundary of the shooting object in the shooting interface: the object boundary of the shooting object in the shooting interface may be acquired first, and the region formed by the object boundary may then be determined as the object region of the shooting object. It should be appreciated that the area of the object region may be less than or equal to the interface area of the shooting interface.
Step S402, acquiring the area corresponding to the object area and the interface area corresponding to the shooting interface.
Specifically, after the object region of the shooting object is determined, the area occupied by the object region (which may be referred to as the region area) may be determined, and the area occupied by the shooting interface (which may be referred to as the interface area) may also be determined.
And step S403, determining the shooting presentation effect of the shooting object according to the area of the region and the area of the interface.
Specifically, the shooting presentation effect of the shot object can be determined according to the area of the region and the area of the interface. The specific implementation manner for determining the shooting presentation effect of the shooting object according to the area of the region and the area of the interface may be as follows: an area ratio between the area of the region and the area of the interface may be determined; if the area ratio is larger than the ratio threshold, determining the shooting presenting effect of the shooting object as an abnormal presenting effect; and if the area ratio is smaller than the ratio threshold, the shooting presenting effect of the shooting object can be determined as a reasonable presenting effect.
It should be understood that, in the present application, two ratio thresholds of different sizes may be set. The ratio threshold mentioned above may be referred to as a first ratio threshold (the first ratio threshold is greater than a second ratio threshold). When the area ratio is greater than the first ratio threshold, the proportion of the shooting object may be considered too large, and the shooting presentation effect of the shooting object may be considered not good (an abnormal presentation effect); when the area ratio is smaller than the first ratio threshold, the area ratio may be further compared with the second ratio threshold: if the area ratio is smaller than the second ratio threshold, the proportion of the shooting object may be considered too small, and the shooting presentation effect may likewise be considered not good; if the area ratio is greater than the second ratio threshold, the shooting presentation effect of the shooting object may be considered reasonable. Certainly, in a feasible case, if the proportion of the shooting object is too small, the shooting object may simply be located far away from the camera, and under a specific shooting requirement a smaller proportion of the shooting object may be what is desired; in that case, only one ratio threshold (for example, the first ratio threshold) may be set in the present application, and when the area ratio is smaller than the first ratio threshold, the shooting presentation effect of the shooting object may be directly determined as a reasonable presentation effect.
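The two-threshold check described above can be sketched as follows; the threshold values 0.8 and 0.1 are illustrative assumptions, not values specified by the present application:

```python
def shooting_presentation_effect(region_area, interface_area,
                                 first_ratio_threshold=0.8,
                                 second_ratio_threshold=0.1):
    """Classify the shooting presentation effect from the area ratio
    between the object region and the shooting interface, using the two
    ratio thresholds (first > second)."""
    area_ratio = region_area / interface_area
    if area_ratio > first_ratio_threshold:
        return "abnormal"   # proportion of the shooting object too large
    if area_ratio < second_ratio_threshold:
        return "abnormal"   # proportion of the shooting object too small
    return "reasonable"
```

The single-threshold variant described for far-away subjects would simply drop the second comparison.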
The present application merely illustrates an example of a determination method for determining whether the shooting presentation effect of the shooting object is reasonable, but is not limited thereto, for example, whether the shooting presentation effect is reasonable may also be determined by other object features of the shooting object, for example, whether the shooting presentation effect of the shooting object is reasonable may also be determined by features of a shooting action, a shooting expression, and the like of the shooting object, and a specific manner for determining the shooting presentation effect will not be limited herein.
And step S404, determining whether the shooting presentation effect is reasonable.
Specifically, the shooting presentation effect can be specifically divided into a reasonable presentation effect and an abnormal presentation effect, if the shooting presentation effect is reasonable, the shooting presentation effect can be the reasonable presentation effect, and then the subsequent step S405 can be executed; if the shooting presenting effect is not reasonable, the shooting presenting effect may be an abnormal presenting effect, and then the following step S406 may be executed.
And S405, if the shooting presenting effect is a reasonable presenting effect, outputting a reasonable vibration prompt.
Specifically, the corresponding reasonable vibration effect parameter can be configured for the reasonable presenting effect, and when the shooting presenting effect of the shooting object is the reasonable presenting effect, the terminal device can output the vibration prompt with the corresponding vibration effect according to the reasonable vibration effect parameter (for convenience of distinguishing, the vibration prompt is called as the reasonable vibration prompt).
In step S406, if the shooting presenting effect is an abnormal presenting effect, an abnormal vibration alert is output.
Specifically, an abnormal vibration effect parameter may be configured for the abnormal presentation effect, and when the shooting presentation effect of the shooting object is the abnormal presentation effect, the terminal device may output a vibration alert having the corresponding vibration effect according to the abnormal vibration effect parameter (for convenience of distinction, this is referred to as an abnormal vibration alert). Through the abnormal vibration alert, the shooting object can be prompted in time to make shooting adjustments: for example, the shooting proportion, the shooting expression, the shooting motion, and the like can be adjusted in time.
In the embodiment of the application, the auxiliary function of vibration reminding can be added in the focusing process of the shooting service: when focusing succeeds, a focusing success vibration reminder can be output, and the focusing result in shooting can be fed back clearly and in time, so that a non-defocused, clear shot image (shot picture) can be obtained under the condition of successful focusing, and the quality of the shot image can be improved. In conclusion, the method and the device can perform focusing feedback through vibration reminding in the shooting service, so that the picture quality of the shot picture can be improved.
Optionally, according to the above description, when the shooting object is successfully focused, vibration reminders with different vibration effects can be performed for shooting objects of different object types, for shooting objects at different object positions, and for different numbers of shooting objects. In addition, the method and the device can also perform vibration reminding on other object information related to the shooting object in the shooting service; for example, vibration reminding can be performed according to the shooting presentation effect of the shooting object in the shooting interface. Moreover, when the vibration prompt is output, analog audio for the object type can be synchronously output according to the shooting object type to which the shooting object belongs. For example, the present application may identify the shooting object type to which the shooting object belongs, and, in the case where the shooting object type is a specific object type (e.g., an animal type), may further identify the object sub-type to which the shooting object belongs (e.g., the animal type may include a tiger type, a monkey type, a puppy type, a lion type, etc.), and may then generate object simulation audio for the shooting object based on the object sub-type. Simultaneously with the vibration reminding, the object simulation audio for the shooting object can be synchronously output. For ease of understanding, please refer to fig. 5; fig. 5 is a schematic flow chart illustrating a process of synchronously outputting a vibration alert and an audio alert according to an embodiment of the present application. As shown in fig. 5, the process may include at least the following steps S501 to S503:
in step S501, a photographic subject type to which a photographic subject belongs is acquired.
Specifically, the type of the photographic subject may be any type for describing the attribute characteristics of the subject, and may specifically include, for example, a person type, an animal type, a plant type, a building type, and the like.
Step S502 generates an object simulation audio for the photographic subject according to the photographic subject type.
Specifically, different analog audio can be generated for different shooting object types, and when the shooting object type to which the shooting object belongs is identified, the corresponding analog audio can be generated based on that type. For example, for a person type, simulated audio for characterizing a person (e.g., simulated audio of a child's laughter) may be generated; for an animal type, simulated audio for characterizing the animal (e.g., simulated audio of a frog's croak or a puppy's bark) may be generated; and for a plant type, simulated audio for characterizing plants (e.g., simulated audio of a flowing stream) may be generated. That is, different analog audio may be generated for different shooting object types.
Of course, in a possible embodiment, when the type of the object is identified, it is possible to further identify what object sub-type the object belongs to, and generate more targeted object simulation audio according to the object sub-type. For example, the character types may include an adult subtype and a child subtype, the animal types may include a tiger subtype, a mouse subtype, a chicken subtype and the like, the plant types may include a flower type, a tree type and the like, and the object subtypes of different object types are correspondingly different, and will not be illustrated one by one here.
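The type-and-sub-type selection described in these two steps can be sketched as a lookup that prefers a sub-type entry when one exists; the type names and audio file names below are assumptions made for the sketch:

```python
# Illustrative mapping from (object type, object sub-type) to simulated
# audio; a key with sub-type None is the generic entry for that type.
SIMULATED_AUDIO = {
    ("person", "child"): "child_laughter.wav",
    ("animal", "frog"): "frog_croak.wav",
    ("animal", "puppy"): "puppy_bark.wav",
    ("plant", None): "stream.wav",
}

def object_simulation_audio(object_type, object_subtype=None):
    """Prefer the more targeted sub-type entry when one exists; otherwise
    fall back to the generic entry for the object type, if any."""
    return (SIMULATED_AUDIO.get((object_type, object_subtype))
            or SIMULATED_AUDIO.get((object_type, None)))
```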
In step S503, when the focusing success vibration alert for the photographic subject is output, the subject analog audio is synchronously output.
Specifically, after generating the object simulation audio for the photographic subject, when outputting the focus success vibration alert for the photographic subject, the object simulation audio for the photographic subject may be synchronously output. Of course, in a possible embodiment, for the object analog audio, the object analog audio may also be output after outputting the focusing success vibration alert, that is, the output may not be synchronous, and the application will not limit this.
In the embodiment of the application, the auxiliary function of vibration reminding can be added in the focusing process of the shooting service: when focusing succeeds, a focusing success vibration reminder can be output, and the focusing result in shooting can be fed back clearly and in time, so that a non-defocused, clear shot image (shot picture) can be obtained under the condition of successful focusing, and the quality of the shot image can be improved. In addition, by adding the reminding mode of analog audio to the vibration reminding, the type of the current shooting object can be prompted in time in an engaging manner, which can increase the interest of shooting.
Optionally, according to the above description, when the shooting object is successfully focused, the vibration reminding with different vibration effects can be performed on the shooting objects with different object types, the vibration reminding with different vibration effects can be performed on the shooting objects at different object positions, and the vibration reminding with different vibration effects can be performed on the shooting objects with different object numbers. In addition, the method and the device can also carry out vibration reminding on other object information related to the shooting object in the shooting service, for example, the method and the device can carry out vibration reminding according to the shooting presenting effect of the shooting object in the shooting interface. In addition, when the vibration reminding is output, the current emotion of the shooting object can be predicted according to partial feature data presented by the shooting object, and corresponding vibration reminding is carried out based on the current emotion. For ease of understanding, please refer to fig. 6 together, and fig. 6 is a schematic flow chart illustrating a process of outputting a vibratory alert and an emotional alert synchronously according to an embodiment of the present application. As shown in fig. 6, the flow may include at least the following steps S601 to S604:
in step S601, a photographic subject type to which a photographic subject belongs is acquired.
Specifically, the type of the photographic subject may be any type for describing the attribute characteristics of the subject, and may specifically include, for example, a person type, an animal type, a plant type, a building type, and the like.
Step S602, if the shooting object type is the specified object type, key part data of the shooting object in the shooting interface is obtained.
Specifically, the specified object type may be one or more of the types described above for describing object attribute characteristics; for example, the specified object type may specifically include a person type and an animal type. If the shooting object type to which the shooting object belongs is a person type or an animal type, the terminal device can acquire key part data of the shooting object in the shooting interface. The key part data may refer to data of a certain key part of the shooting object, and the key part may be any part of the shooting object; for example, the key part may be a face, a hand, a leg, and the like.
And step S603, inputting the key part data into the recognition model, and determining the predicted emotion of the shooting object through the recognition model and the key part data.
Specifically, the key part data can be input into the recognition model, where the recognition model may be a model for emotion prediction obtained after model training in a machine learning manner (specifically, any artificial intelligence model with an emotion prediction function); through model training, the recognition model can acquire the ability to predict emotion accurately, and the predicted emotion of the shooting object can be determined through the recognition model and the key part data. For example, when the key part data is face data, the face data is input to the recognition model, the current expression of the shooting object is then recognized by the recognition model, and based on the current expression, the current emotion of the shooting object can be predicted (for example, if the current expression is a sad expression, the current emotion of the shooting object may be a sad emotion).
And step S604, if the predicted emotion is the specified emotion, synchronously outputting the emotion vibration prompt aiming at the predicted emotion when outputting the focusing success vibration prompt aiming at the shooting object.
Specifically, the specified emotion may include a negative emotion; for example, the specified emotion may specifically include a sad emotion, an angry emotion, and the like. A corresponding vibration effect parameter (which may be referred to as an emotion vibration effect parameter) may be configured for the specified emotion, and when the predicted emotion is the specified emotion, the terminal device may, when outputting the focusing success vibration reminder for the shooting object, synchronously output the emotion vibration reminder corresponding to the predicted emotion according to the emotion vibration effect parameter. Of course, in a possible embodiment, the emotion vibration reminder may instead be output after the focusing success vibration reminder is output, that is, the two need not be output synchronously; the present application does not limit this. It should be understood that, by outputting the vibration reminder for the predicted emotion, the emotion of the shooting object in the shooting interface (which indirectly reflects the motion and expression of the shooting object) can be prompted in real time, and the shooting object can then adjust the key part data in time based on the emotion vibration reminder, so that the quality of the shot image is further improved.
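The decision described above (always output the focusing success reminder on success, and add the emotion reminder only for a specified negative emotion) can be sketched as follows; the emotion labels in the set are illustrative assumptions:

```python
SPECIFIED_EMOTIONS = {"sad", "angry"}  # negative emotions; illustrative set

def reminders_to_output(focus_succeeded, predicted_emotion):
    """Return the list of reminders to emit for one focusing result: the
    focusing success vibration reminder on success, plus a synchronized
    emotion vibration reminder only when the predicted emotion is one of
    the specified (negative) emotions."""
    reminders = []
    if focus_succeeded:
        reminders.append("focus_success")
        if predicted_emotion in SPECIFIED_EMOTIONS:
            reminders.append("emotion:" + predicted_emotion)
    return reminders
```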
In the embodiment of the application, an auxiliary vibration reminding function can be added to the focusing process of the shooting service: when focusing succeeds, a focusing success vibration reminder can be output, and the focusing result during shooting can be fed back clearly and timely, so that a non-defocused, clear shot image (shot picture) can be obtained once focusing succeeds, and the quality of the shot image can be improved. In addition, by adding an emotion-oriented vibration reminding mode, the current emotion of the shooting object can also be indicated in a timely and interesting manner, which increases the fun of shooting and, at the same time, guides the shooting object to adjust the key part data in time, thereby further improving the quality of the shot image.
Further, please refer to fig. 7, fig. 7 is a flowchart of a system according to an embodiment of the present disclosure. As shown in fig. 7, the system flow may include at least the following steps S71 to S80:
in step S71, the photographer starts a shooting application through the terminal device.
Specifically, the shooting application here may refer to a camera application, and the vibration SDK may be integrated into the shooting application. When the photographer starts the shooting application, the camera can be started, and the terminal device can start the image sensor for collecting images; when the photographer selects a shooting mode, shooting can be carried out (the shooting interface for executing the shooting service is entered).
In step S72, the terminal device captures a picture and passes it to the shooting application.
Specifically, the image sensor of the terminal device may capture a picture in the shooting interface, and the terminal device may transmit the captured picture to the shooting application.
In step S73, the shooting application determines whether focusing is successful.
Specifically, after the shot picture is transmitted to the shooting application, the shooting application can judge whether the shooting object is focused successfully. Meanwhile, the shooting application can also identify object features in the picture, such as the shooting object type to which the shooting object belongs, the shooting object position, and the number of shooting objects.
A Surface object can be set in the shooting application to preview the shot picture in real time, and the preview picture of the shot image can be displayed on the shooting interface through the SurfaceView component; the SurfaceHolder associated with the SurfaceView can report the state of the Surface in real time (the state here refers to whether a focusing event is triggered, that is, the focusing state). The shooting object may be detected through the CameraInfo method (for example, a key part of a person-type object may be detected); specifically, the CameraInfo method includes the abstract interface FaceDetectionListener, and when key part detection starts, this abstract interface is called back, thereby assisting in focusing on the shooting object.
It should be noted that the shooting application may be integrated or deployed in the terminal device, so that the implementation process of the shooting application may be actually understood as an execution process of the terminal device.
In step S74, the shooting application determines vibration parameters according to the focusing condition.
Specifically, when focusing succeeds, the shooting application may acquire vibration parameters such as the vibration effect parameter corresponding to the shooting object type, the vibration effect parameter corresponding to the shooting object position, and the vibration effect parameter corresponding to the number of shooting objects. When focusing fails, a vibration effect parameter corresponding to the focusing failure may be acquired.
Step S75, the shooting application returns the vibration parameters to the terminal device.
And step S76, the terminal equipment outputs vibration prompt according to the vibration parameters.
Specifically, the vibration reminder is perceptible to the photographer, who can make corresponding shooting adjustments based on it. For example, the shooting angle can be adjusted in time for accurate focusing until focusing succeeds; once focusing succeeds, the photographer can capture the shot image based on the focusing success vibration reminder.
In step S77, the photographer transmits a photographing signal to the terminal device.
Specifically, when focusing is successful, the photographer may send a shooting signal to the terminal device.
In step S78, the terminal device transmits a shooting signal to the shooting application.
In step S79, the shooting application saves the shot image to the local.
And step S80, uploading the shot image to a background server by the shooting application.
Specifically, the backend server herein may refer to an application backend server corresponding to the shooting application, which may correspond to the service server 1000 in the embodiment corresponding to fig. 1 described above, and may be configured to store service data generated in the shooting application (for example, a shot image, a mapping relationship between different object types and vibration parameters, and the like).
For a specific implementation manner of steps S71 to S80, reference may be made to the description in the embodiment corresponding to fig. 3 to fig. 6, and details will not be repeated here.
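The system flow of steps S71 to S80 can be sketched as a simplified pipeline as follows. The function names, the parameter table contents, and the picture representation are illustrative assumptions made for this sketch, not the patent's actual implementation.

```python
def determine_vibration_params(focus_success, object_type=None):
    """S74: choose vibration parameters according to the focusing condition.

    All effect names and amplitude values below are illustrative assumptions.
    """
    if not focus_success:
        return {"effect": "focus_failure", "amplitude": 50}
    # On success, look up the effect configured for the shooting object type.
    type_params = {
        "person": {"effect": "focus_success", "amplitude": 150},
        "animal": {"effect": "focus_success", "amplitude": 100},
    }
    return type_params.get(object_type, {"effect": "focus_success", "amplitude": 80})

def shooting_flow(picture):
    """Walk one captured picture through the S72-S80 flow (simplified)."""
    focus_ok = picture.get("in_focus", False)                         # S73: judge focusing
    params = determine_vibration_params(focus_ok,
                                        picture.get("object_type"))   # S74: pick parameters
    output = ("vibrate", params)                                      # S75-S76: output reminder
    saved = focus_ok                                                  # S77-S80: shoot, save, upload
    return output, saved
```

A successful focus on a person-type object yields the person-specific reminder and a saved image; a focus failure yields only the failure reminder.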
In the embodiment of the application, an auxiliary vibration reminding function can be added to the focusing process of the shooting service: when focusing succeeds, a focusing success vibration reminder can be output, and the focusing result during shooting can be fed back clearly and timely, so that a non-defocused, clear shot image (shot picture) can be obtained once focusing succeeds, and the quality of the shot image can be improved.
Further, please refer to fig. 8, wherein fig. 8 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application. The data processing means may be a computer program (comprising program code) running on a computer device, for example the data processing means being an application software; the data processing apparatus may be adapted to perform the method illustrated in fig. 3. As shown in fig. 8, the data processing apparatus 1 may include: the device comprises an interface display module 11, a focusing module 12 and a vibration reminding module 13.
An interface display module 11, configured to display a shooting interface for executing a shooting service;
the focusing module 12 is configured to perform focusing processing on a shooting object when the shooting interface includes the shooting object;
the vibration reminding module 13 is configured to output, through the terminal device, a focusing success vibration reminder for the shooting object according to the shooting object type to which the shooting object belongs and/or the shooting object position, if the shooting object is focused successfully; the terminal device is the device that displays the shooting interface.
For specific implementation manners of the interface display module 11, the focusing module 12 and the vibration reminding module 13, reference may be made to the descriptions of step S101 to step S103 in the embodiment corresponding to fig. 3, which will not be described herein again.
In one embodiment, the focus success vibration alert comprises a first vibration alert and a second vibration alert;
the vibration reminding module 13 may include: an object information acquisition unit 131 and a vibration output unit 132.
An object information acquiring unit 131 configured to acquire a type of a photographic subject to which the photographic subject belongs and a position of the photographic subject at which the photographic subject is located in a photographic interface;
the vibration output unit 132 is configured to output, through the terminal device, a first vibration reminder with a first vibration effect when the system time reaches a first time; the first vibration effect is used for representing that the shooting object type to which the shooting object belongs is a set object type;
the vibration output unit 132 is further configured to output, through the terminal device, a second vibration reminder with a second vibration effect when the system time reaches a second time; the second vibration effect is used for representing that the shooting object position of the shooting object in the shooting interface is a set object position; the second time is later than the first time.
For specific implementation of the object information obtaining unit 131 and the vibration output unit 132, reference may be made to the description of step S103 in the embodiment corresponding to fig. 3, which will not be repeated herein.
In one embodiment, the vibration output unit 132 may include: a type table acquisition subunit 1321, a target parameter determination subunit 1322, and a vibration output subunit 1323.
A type table obtaining subunit 1321, configured to obtain a type parameter mapping table when the system time reaches the first time; the type parameter mapping table comprises a mapping relation between a configuration object type set and a configuration vibration effect parameter set, and a mapping relation exists between one configuration object type in the configuration object type set and one configuration vibration effect parameter in the configuration vibration effect parameter set;
a target parameter determining subunit 1322 is configured to determine, as the first target configuration object type, a configuration object type that is the same as the shooting object type in the configuration object type set;
the target parameter determining subunit 1322 is further configured to determine, as the first target vibration effect parameter corresponding to the type of the photographic object, the configuration vibration effect parameter that has a mapping relationship with the first target configuration object type in the configuration vibration effect parameter set;
a vibration output subunit 1323 configured to determine the vibration effect indicated by the first target vibration effect parameter as the first vibration effect;
the vibration output subunit 1323 is further configured to output, in the terminal device, the first vibration alert according to the first target vibration effect parameter.
For specific implementation manners of the type table obtaining subunit 1321, the target parameter determining subunit 1322 and the vibration output subunit 1323, reference may be made to the description of step S103 in the embodiment corresponding to fig. 3, and details will not be described here.
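The table lookup performed by the type table obtaining subunit 1321 and the target parameter determining subunit 1322 can be sketched as follows. The table contents, keys, and function names are illustrative assumptions, not the actual implementation.

```python
# Hypothetical type parameter mapping table: each configuration object type
# maps to one configuration vibration effect parameter. Values are assumed.
TYPE_PARAM_TABLE = {
    "person": {"intensities": [80, 120, 160]},
    "animal": {"intensities": [60, 100]},
    "plant":  {"intensities": [40]},
}

def first_target_vibration_param(shooting_object_type):
    """Look up the first target vibration effect parameter.

    The configuration object type identical to the shooting object type is
    the first target configuration object type; the parameter mapped to it
    is the first target vibration effect parameter (None if no type in the
    configuration object type set matches).
    """
    return TYPE_PARAM_TABLE.get(shooting_object_type)
```

The vibration effect indicated by the returned parameter would then be used as the first vibration effect when outputting the first vibration reminder.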
In one embodiment, the first target vibration effect parameter includes N vibration intensity parameters, and a vibration order corresponding to the N vibration intensity parameters;
the vibration output subunit 1323 is further specifically configured to sort the N vibration intensity parameters according to the vibration order corresponding to the N vibration intensity parameters, so as to obtain an intensity parameter sequence;
the vibration output subunit 1323 is further specifically configured to, in the terminal device, perform vibration reminding sequentially according to the vibration intensity parameters in the intensity parameter sequence.
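The ordering of the N vibration intensity parameters into an intensity parameter sequence can be sketched as follows; the pair representation is an assumption made for this illustration.

```python
def intensity_sequence(params):
    """Sort N vibration intensity parameters by their vibration order.

    params: list of (vibration_order, vibration_intensity) pairs.
    Returns the intensity parameter sequence; the device would then vibrate
    with each intensity in turn.
    """
    return [intensity for _, intensity in sorted(params)]
```

For example, intensities configured with orders 2, 1, 3 would be reordered so that the order-1 intensity vibrates first.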
In one embodiment, the vibration output unit 132 may include: an initial table acquisition sub-unit 1324, a parameter acquisition sub-unit 1325 to be replaced, a parameter replacement sub-unit 1326, and a type table determination sub-unit 1327.
An initial table obtaining subunit 1324, configured to obtain an initial type parameter mapping table; the initial type parameter mapping table comprises a mapping relation between a configuration object type set and an initial configuration vibration effect parameter set, and a mapping relation exists between one configuration object type in the configuration object type set and one initial configuration vibration effect parameter in the initial configuration vibration effect parameter set;
a parameter to be replaced acquiring subunit 1325, configured to acquire, when receiving parameter modification information for a second target configuration object type in the configuration object type set sent by the terminal device, a second target vibration effect parameter having a mapping relationship with the second target configuration object type in the initial configuration vibration effect parameter set; the parameter modification information comprises a modified vibration effect parameter of the second target configuration object type;
a parameter replacing subunit 1326, configured to replace the second target vibration effect parameter in the initially configured vibration effect parameter set with the modified vibration effect parameter, so as to obtain a configured vibration effect parameter set;
a type table determining subunit 1327, configured to determine, as a type parameter mapping table, an initial type parameter mapping table that includes a mapping relationship between the set of configuration object types and the set of configuration vibration effect parameters.
For a specific implementation manner of the initial table obtaining subunit 1324, the parameter obtaining subunit 1325 to be replaced, the parameter replacing subunit 1326, and the type table determining subunit 1327, reference may be made to the description of step S103 in the embodiment corresponding to fig. 3, which will not be described again here.
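The parameter replacement performed by subunits 1324 to 1327 can be sketched as follows. The table shape and function name are assumptions made for this illustration.

```python
def apply_parameter_modification(initial_table, target_type, modified_param):
    """Replace one type's initial vibration effect parameter.

    initial_table: the initial type parameter mapping table (type -> parameter).
    target_type:   the second target configuration object type named in the
                   received parameter modification information.
    modified_param: the modified vibration effect parameter for that type.
    Returns the resulting type parameter mapping table; the initial table is
    left unchanged.
    """
    if target_type not in initial_table:
        raise KeyError(f"unknown configuration object type: {target_type}")
    table = dict(initial_table)           # copy, keeping the initial table intact
    table[target_type] = modified_param   # replace the second target parameter
    return table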
In an embodiment, the vibration output unit 132 is further specifically configured to obtain a position parameter mapping table when the system time reaches the second time; the position parameter mapping table comprises a mapping relation between a configuration object position set and a configuration position vibration parameter set, and a mapping relation exists between one configuration object position in the configuration object position set and one configuration position vibration parameter in the configuration position vibration parameter set;
the vibration output unit 132 is further specifically configured to determine, as a target configuration object position, a configuration object position in the configuration object position set that is the same as the shooting object position;
the vibration output unit 132 is further specifically configured to determine, as a target position vibration parameter corresponding to the position of the shooting object, a configuration position vibration parameter in the configuration position vibration parameter set, which has a mapping relationship with the target configuration object position;
the vibration output unit 132 is further specifically configured to determine the vibration effect indicated by the target location vibration parameter as a second vibration effect, and output a second vibration alert in the terminal device according to the target location vibration parameter.
In one embodiment, there are at least two shooting objects, and each shooting object belongs to the same shooting object type; the focusing success vibration reminder comprises a first vibration reminder, a second vibration reminder, and a third vibration reminder;
the vibration reminding module 13 may include: an object related information acquisition unit 133 and a vibration alert unit 134.
An object association information acquisition unit 133 for acquiring a photographic subject type to which a photographic subject belongs and a photographic subject position at which the photographic subject is located in a photographic interface;
the vibration reminding unit 134 is configured to output, through the terminal device, a first vibration reminder with a first vibration effect when the system time reaches a first time; the first vibration effect is used for representing that the shooting object type to which the shooting object belongs is a set object type;
the vibration reminding unit 134 is further configured to output, through the terminal device, a second vibration reminder with a second vibration effect when the system time reaches a second time; the second vibration effect is used for representing that the shooting object position of the shooting object in the shooting interface is a set object position; the second time is later than the first time;
the vibration reminding unit 134 is further configured to output, through the terminal device, a third vibration reminder with a third vibration effect when the system time reaches a third time; the third vibration effect is used for representing that the number of the at least two shooting objects is a set object number; the third time is later than the second time.
For specific implementation manners of the object association information obtaining unit 133 and the vibration reminding unit 134, reference may be made to the description of step S103 in the embodiment corresponding to fig. 3, and details are not repeated here.
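The staged output above, where each reminder fires strictly later than the previous one, can be sketched as follows; the effect labels and the simple list-of-pairs schedule are assumptions made for this illustration, and a real implementation would schedule the vibrations on the device clock.

```python
def schedule_reminders(t1, t2, t3):
    """Build the staged reminder schedule: type effect at the first time,
    position effect at the later second time, count effect at the still
    later third time."""
    assert t1 < t2 < t3, "each reminder time must be later than the previous one"
    return [(t1, "type_effect"), (t2, "position_effect"), (t3, "count_effect")]
```

The strict ordering lets the photographer distinguish the three pieces of feedback by when each vibration occurs.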
In one embodiment, the shooting objects include a first shooting object and a second shooting object, and the shooting object type to which the first shooting object belongs is different from the shooting object type to which the second shooting object belongs; the focusing success vibration reminder for the shooting objects comprises a first object vibration reminder for the first shooting object and a second object vibration reminder for the second shooting object; the first object vibration reminder comprises a first sub-vibration reminder and a second sub-vibration reminder; the second object vibration reminder comprises a third sub-vibration reminder and a fourth sub-vibration reminder;
the vibration reminding module 13 may include: a first object reminder unit 135 and a second object reminder unit 136.
A first object reminding unit 135, configured to output, through the terminal device, a first sub-vibration reminder with a first sub-vibration effect when the system time reaches a first sub-time; the first sub-vibration effect is used for representing that the shooting object type to which the first shooting object belongs is a first set object type;
the first object reminding unit 135 is further configured to output, through the terminal device, a second sub-vibration reminder with a second sub-vibration effect when the system time reaches a second sub-time; the second sub-vibration effect is used for representing that the shooting object position of the first shooting object in the shooting interface is a first set object position; the second sub-time is later than the first sub-time;
the second object reminding unit 136 is configured to output, through the terminal device, a third sub-vibration reminder with a third sub-vibration effect when the system time reaches a third sub-time; the third sub-vibration effect is used for representing that the shooting object type to which the second shooting object belongs is a second set object type; the third sub-time is later than the second sub-time;
the second object reminding unit 136 is further configured to output, through the terminal device, a fourth sub-vibration reminder with a fourth sub-vibration effect when the system time reaches a fourth sub-time; the fourth sub-vibration effect is used for representing that the shooting object position of the second shooting object in the shooting interface is a second set object position; the fourth sub-time is later than the third sub-time; the time difference between the fourth sub-time and the third sub-time is the same as the time difference between the second sub-time and the first sub-time.
For a specific implementation manner of the first object reminding unit 135 and the second object reminding unit 136, reference may be made to the description of step S103 in the embodiment corresponding to fig. 3, which will not be described herein again.
In one embodiment, the data processing apparatus 1 may further include: a region acquisition module 14, an area acquisition module 15, a presentation effect determination module 16, and a presentation effect reminding module 17.
The region acquisition module 14 is configured to acquire the object region where the shooting object is located in the shooting interface;
the area acquisition module 15 is configured to acquire the region area corresponding to the object region and the interface area corresponding to the shooting interface;
the presentation effect determination module 16 is configured to determine the shooting presentation effect of the shooting object according to the region area and the interface area;
the presentation effect reminding module 17 is used for outputting reasonable vibration reminding if the shooting presentation effect is a reasonable presentation effect;
and the presentation effect reminding module 17 is further configured to output an abnormal vibration reminder if the shooting presentation effect is an abnormal presentation effect.
For specific implementation manners of the region obtaining module 14, the area obtaining module 15, the presentation effect determining module 16, and the presentation effect reminding module 17, reference may be made to the descriptions of step S401 to step S405 in the embodiment corresponding to fig. 4, and details will not be described here.
In one embodiment, the presentation effect determination module 16 may include: a ratio determining unit 161, a first effect determining unit 162, and a second effect determining unit 163.
A ratio determining unit 161 for determining an area ratio between the area of the region and the area of the interface;
a first effect determining unit 162, configured to determine a shooting presenting effect of the shooting object as an abnormal presenting effect if the area ratio is greater than the ratio threshold;
a second effect determination unit 163 configured to determine the shooting presentation effect of the shooting object as a reasonable presentation effect if the area ratio is smaller than the ratio threshold.
For specific implementation of the ratio determining unit 161, the first effect determining unit 162, and the second effect determining unit 163, reference may be made to the description of step S403 in the embodiment corresponding to fig. 4, which will not be described herein again.
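The ratio comparison performed by units 161 to 163 can be sketched as follows; the threshold value is an illustrative assumption, and this sketch follows the text in treating a ratio above the threshold as abnormal and below it as reasonable.

```python
RATIO_THRESHOLD = 0.6  # assumed proportion threshold

def presentation_effect(region_area, interface_area):
    """Judge the shooting presentation effect from the area ratio between
    the object region and the shooting interface."""
    ratio = region_area / interface_area
    if ratio > RATIO_THRESHOLD:
        return "abnormal"    # object fills too much of the frame
    return "reasonable"
```

For example, an object occupying 30% of the interface would be judged reasonable, while one occupying 80% would be judged abnormal.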
In one embodiment, the data processing apparatus 1 may further include: failure alert module 18.
And the failure reminding module 18 is used for outputting a focusing failure vibration reminding aiming at the shooting object if the shooting object fails to focus.
For a specific implementation manner of the failure reminding module 18, reference may be made to the description of step S103 in the embodiment corresponding to fig. 3, which will not be described herein again.
In one embodiment, the data processing apparatus 1 may further include: a type acquisition module 19, an analog audio generation module 21 and an audio output module 22.
A type acquisition module 19 configured to acquire a type of a photographic subject to which the photographic subject belongs;
a simulated audio generating module 21 for generating an object simulated audio for the photographic subject according to the photographic subject type;
and the audio output module 22 is used for synchronously outputting the object simulation audio when outputting the focusing success vibration prompt aiming at the shooting object.
For specific implementation manners of the type obtaining module 19, the analog audio generating module 21, and the audio output module 22, reference may be made to the description of step S501 to step S503 in the embodiment corresponding to fig. 5, which will not be described herein again.
In one embodiment, the data processing apparatus 1 may further include: an object type acquisition module 23, a key part acquisition module 24, an emotion prediction module 25, and an emotion reminding module 26.
An object type acquiring module 23, configured to acquire a photographic subject type to which a photographic subject belongs;
a key part obtaining module 24, configured to obtain key part data of the photographic object in the photographic interface if the photographic object type is the specified object type;
the emotion prediction module 25 is used for inputting the key part data into the recognition model and determining the predicted emotion of the shooting object through the recognition model and the key part data;
and the emotion reminding module 26 is used for synchronously outputting the emotion vibration reminding aiming at the predicted emotion when the focusing success vibration reminding aiming at the shooting object is output if the predicted emotion is the specified emotion.
For specific implementation manners of the object type obtaining module 23, the key part obtaining module 24, the emotion predicting module 25, and the emotion reminding module 26, reference may be made to the descriptions of step S601 to step S604 in the embodiment corresponding to fig. 6, and details will not be described here.
In the embodiment of the application, in a shooting interface for executing a shooting service, when the shooting interface contains a shooting object, focusing processing can be performed on the shooting object; if the shooting object is focused successfully, a focusing success vibration reminder for the shooting object can be output according to the shooting object type to which the shooting object belongs and/or the shooting object position. It should be understood that an auxiliary vibration reminding function can be added to the focusing process of the shooting service: when focusing succeeds, a focusing success vibration reminder can be output, and the focusing result during shooting (the shooting object type and/or the shooting object position) can be fed back clearly and timely, so that a non-defocused, clear shot image (shot picture) can be obtained once focusing succeeds, and the quality of the shot image can be improved. In conclusion, the application can perform focusing feedback through vibration reminding in a shooting service, so that the picture quality of the shot picture can be improved.
Further, please refer to fig. 9, where fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 9, the data processing apparatus 1 in the embodiment corresponding to fig. 8 may be applied to the computer device 8000, and the computer device 8000 may include: a processor 8001, a network interface 8004, and a memory 8005; further, the computer device 8000 may include: a user interface 8003 and at least one communication bus 8002. The communication bus 8002 is used for connection and communication between these components. The user interface 8003 may include a display (Display) and a keyboard (Keyboard); optionally, the user interface 8003 may further include a standard wired interface and a wireless interface. The network interface 8004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 8005 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory. Optionally, the memory 8005 may also be at least one storage device located remotely from the aforementioned processor 8001. As shown in fig. 9, the memory 8005, as a computer-readable storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
In the computer device 8000 shown in fig. 9, the network interface 8004 may provide a network communication function; the user interface 8003 is mainly used for providing an input interface for the user; and the processor 8001 may be used to invoke the device control application stored in the memory 8005 to implement:
displaying a shooting interface for executing a shooting service;
when the shooting interface comprises a shooting object, carrying out focusing processing on the shooting object;
if the shooting object is focused successfully, outputting a vibration reminder for the successful focusing of the shooting object according to the shooting object type to which the shooting object belongs and/or the shooting object position; the terminal device is the device that displays the shooting interface.
It should be understood that the computer device 8000 described in this embodiment may perform the description of the data processing method in the embodiment corresponding to fig. 3 to fig. 6, and may also perform the description of the data processing apparatus 1 in the embodiment corresponding to fig. 8, which is not described herein again. In addition, the beneficial effects of the same method are not described in detail.
Further, here, it is to be noted that: an embodiment of the present application further provides a computer-readable storage medium, where the computer program executed by the aforementioned data processing computer device 8000 is stored in the computer-readable storage medium, and the computer program includes program instructions, and when the processor executes the program instructions, the description of the data processing method in the embodiment corresponding to fig. 3 to fig. 6 can be executed, and therefore, the description will not be repeated here. In addition, the beneficial effects of the same method are not described in detail. For technical details not disclosed in embodiments of the computer-readable storage medium referred to in the present application, reference is made to the description of embodiments of the method of the present application.
The computer-readable storage medium may be the data processing apparatus provided in any of the foregoing embodiments or an internal storage unit of the computer device, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Memory Card (SMC), a Secure Digital (SD) card, a flash card (flash card), and the like, provided on the computer device. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the computer device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the computer device. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
In one aspect of the application, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the method provided by the aspect in the embodiment of the present application.
The terms "first," "second," and the like in the description and in the claims and drawings of the embodiments of the present application are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "comprises" and any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, apparatus, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or modules recited, but may alternatively include other steps or modules not recited, or may alternatively include other steps or elements inherent to such process, method, apparatus, article, or apparatus.
Those of ordinary skill in the art will appreciate that the illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the components and steps of the various examples have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as a departure from the scope of the present application.
The method and the related apparatus provided by the embodiments of the present application are described with reference to the flowcharts and/or structural diagrams of the method provided by the embodiments. Specifically, each flow and/or block of the flowcharts and/or structural diagrams, and combinations of flows and/or blocks therein, may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the structural diagrams. These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the specified functions. These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the specified functions.
The above disclosure describes only preferred embodiments of the present application and is not intended to limit the scope of the claims; equivalent variations and modifications made in accordance with the claims of the present application still fall within the scope of the present application.

Claims (16)

1. A data processing method, comprising:
displaying a shooting interface for executing a shooting service;
when the shooting interface comprises a shooting object, carrying out focusing processing on the shooting object;
and if the shooting object is successfully focused, outputting a focusing success vibration prompt for the shooting object according to the shooting object type to which the shooting object belongs and/or the shooting object position where the shooting object is located.
2. The method of claim 1, wherein the focusing success vibration prompt comprises a first vibration prompt and a second vibration prompt;
the outputting of the focusing success vibration prompt for the shooting object according to the shooting object type to which the shooting object belongs and/or the shooting object position comprises:
acquiring the type of a shooting object to which the shooting object belongs and the position of the shooting object in the shooting interface;
when the system time reaches a first moment, outputting a first vibration prompt with a first vibration effect through the terminal equipment; the first vibration effect is used for representing that the shooting object type to which the shooting object belongs to a set object type;
when the system time reaches a second moment, outputting a second vibration prompt with a second vibration effect through the terminal equipment; the second vibration effect is used for representing that the shooting object position where the shooting object is located in the shooting interface belongs to a set object position; the second time is later than the first time.
3. The method of claim 2, wherein the outputting, by the terminal device, of the first vibration prompt with the first vibration effect when the system time reaches the first moment comprises:
when the system time reaches a first moment, acquiring a type parameter mapping table; the type parameter mapping table comprises a mapping relation between a configuration object type set and a configuration vibration effect parameter set, and a mapping relation exists between one configuration object type in the configuration object type set and one configuration vibration effect parameter in the configuration vibration effect parameter set;
determining a configuration object type which is the same as the shooting object type in the configuration object type set as a first target configuration object type;
determining a configuration vibration effect parameter having a mapping relation with the first target configuration object type in the configuration vibration effect parameter set as a first target vibration effect parameter corresponding to the shooting object type;
and determining the vibration effect indicated by the first target vibration effect parameter as the first vibration effect, and outputting a first vibration prompt in the terminal equipment according to the first target vibration effect parameter.
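The lookup described in claim 3 can be sketched as a simple dictionary match. This is an illustrative sketch only, not part of the claims; the table contents and all identifiers (`TYPE_PARAMETER_TABLE`, `first_target_vibration_parameter`) are hypothetical and do not come from the patent.

```python
# Hypothetical type parameter mapping table: each configured object type
# maps to one configured vibration effect parameter (all values illustrative).
TYPE_PARAMETER_TABLE = {
    "person":    {"intensities": [0.8, 0.4], "duration_ms": 120},
    "animal":    {"intensities": [0.5, 0.5, 0.5], "duration_ms": 80},
    "landscape": {"intensities": [0.3], "duration_ms": 200},
}

def first_target_vibration_parameter(shooting_object_type: str) -> dict:
    """Find the configured object type equal to the detected shooting object
    type and return the vibration effect parameter mapped to it (claim 3)."""
    if shooting_object_type not in TYPE_PARAMETER_TABLE:
        raise LookupError(f"no configured object type matches {shooting_object_type!r}")
    return TYPE_PARAMETER_TABLE[shooting_object_type]
```

The returned parameter is what claim 3 calls the first target vibration effect parameter; the terminal device would then drive its vibrator according to it.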
4. The method according to claim 3, wherein the first target vibration effect parameter comprises N vibration intensity parameters, and the N vibration intensity parameters correspond to vibration orders; n is a positive integer;
outputting a first vibration prompt in the terminal device according to the first target vibration effect parameter, including:
sequencing the N vibration intensity parameters according to the vibration sequence corresponding to the N vibration intensity parameters to obtain an intensity parameter sequence;
and in the terminal device, sequentially outputting vibration prompts according to the vibration intensity parameters in the intensity parameter sequence.
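The sequencing step of claim 4 can be sketched as follows; the representation of each intensity parameter as an (order, intensity) pair is an assumption about the data layout, and all names are illustrative rather than taken from the patent.

```python
def intensity_sequence(intensity_params):
    """Sort the N vibration intensity parameters by their corresponding
    vibration order and return the intensity parameter sequence (claim 4)."""
    # intensity_params: iterable of (vibration_order, intensity) pairs,
    # possibly arriving unordered.
    return [intensity for _, intensity in sorted(intensity_params)]

# The terminal device would then vibrate once per entry, in sequence order.
sequence = intensity_sequence([(2, 0.4), (1, 0.8), (3, 0.2)])
# sequence == [0.8, 0.4, 0.2]
```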
5. The method of claim 3, further comprising:
acquiring an initial type parameter mapping table; the initial type parameter mapping table comprises a mapping relation between the configuration object type set and an initial configuration vibration effect parameter set, and a mapping relation exists between one configuration object type in the configuration object type set and one initial configuration vibration effect parameter in the initial configuration vibration effect parameter set;
when parameter modification information aiming at a second target configuration object type in the configuration object type set sent by the terminal equipment is received, acquiring a second target vibration effect parameter having a mapping relation with the second target configuration object type in the initial configuration vibration effect parameter set; the parameter modification information comprises a modified vibration effect parameter of the second target configuration object type;
replacing the second target vibration effect parameter in the initial configuration vibration effect parameter set with the modified vibration effect parameter to obtain the configuration vibration effect parameter set;
and determining an initial type parameter mapping table containing the mapping relation between the configuration object type set and the configuration vibration effect parameter set as the type parameter mapping table.
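Claim 5's modification flow amounts to replacing one entry of the initial mapping table. A minimal sketch, with hypothetical names and an illustrative table (the claim does not prescribe any particular data structure):

```python
def apply_parameter_modification(initial_table: dict, target_type: str,
                                 modified_parameter: dict) -> dict:
    """Replace the vibration effect parameter mapped to the second target
    configuration object type with the modified parameter, producing the
    updated type parameter mapping table (claim 5)."""
    if target_type not in initial_table:
        raise KeyError(f"unknown configured object type: {target_type!r}")
    updated = dict(initial_table)   # leave the initial table unmodified
    updated[target_type] = modified_parameter
    return updated

initial = {"person": {"intensities": [0.8]}, "animal": {"intensities": [0.5]}}
table = apply_parameter_modification(initial, "animal", {"intensities": [1.0, 0.2]})
```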
6. The method of claim 2, wherein the outputting, by the terminal device, of the second vibration prompt with the second vibration effect when the system time reaches the second moment comprises:
when the system time reaches a second moment, acquiring a position parameter mapping table; the position parameter mapping table comprises a mapping relation between a configuration object position set and a configuration position vibration parameter set, and a mapping relation exists between one configuration object position in the configuration object position set and one configuration position vibration parameter in the configuration position vibration parameter set;
determining a configuration object position which is the same as the shooting object position in the configuration object position set as a target configuration object position;
determining a configuration position vibration parameter which has a mapping relation with the position of the target configuration object in the configuration position vibration parameter set as a target position vibration parameter corresponding to the position of the shooting object;
and determining the vibration effect indicated by the target position vibration parameter as the second vibration effect, and outputting a second vibration prompt in the terminal equipment according to the target position vibration parameter.
7. The method according to claim 1, wherein the number of the photographic subjects is at least two, and the types of the photographic subjects to which the photographic subjects belong are the same; the focusing success vibration prompt comprises a first vibration prompt, a second vibration prompt and a third vibration prompt;
the outputting of the focusing success vibration prompt for the shooting object according to the shooting object type to which the shooting object belongs and/or the shooting object position comprises:
acquiring the type of a shooting object to which the shooting object belongs and the position of the shooting object in the shooting interface;
when the system time reaches a first moment, outputting the first vibration prompt with a first vibration effect through terminal equipment; the first vibration effect is used for representing that the shooting object type of the shooting object belongs to a set object type;
when the system time reaches a second moment, outputting the second vibration prompt with a second vibration effect through the terminal equipment; the second vibration effect is used for representing that the shooting object position where the shooting object is located in the shooting interface belongs to a set object position; the second time is later than the first time;
when the system time reaches a third moment, outputting a third vibration prompt with a third vibration effect through the terminal equipment; the third vibration effect is used for representing that the number of the at least two shooting objects is the set object number; the third time is later than the second time.
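Claims 2 and 7 output the type, position, and count prompts at strictly increasing moments. A minimal scheduling sketch under illustrative assumptions (a fixed gap between prompts and millisecond timestamps, neither of which the claims specify):

```python
def build_prompt_schedule(start_ms: int, gap_ms: int, prompts: list) -> list:
    """Assign each vibration prompt an output moment strictly later than the
    previous one, as in the first/second/third moments of claims 2 and 7."""
    return [(start_ms + i * gap_ms, prompt) for i, prompt in enumerate(prompts)]

# One prompt each for subject type, subject position, and subject count.
schedule = build_prompt_schedule(0, 150, ["type", "position", "count"])
```

Claim 8 additionally requires equal gaps between the prompt pairs of the two subjects, which this fixed-gap sketch satisfies by construction.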
8. The method according to claim 1, wherein the photographic subject includes a first photographic subject and a second photographic subject, and a photographic subject type to which the first photographic subject belongs is different from a photographic subject type to which the second photographic subject belongs; the focusing success vibration prompt aiming at the shooting object comprises a first object vibration prompt aiming at the first shooting object and a second object vibration prompt aiming at the second shooting object; the first object vibration reminder comprises a first sub vibration reminder and a second sub vibration reminder; the second object vibration prompt comprises a third sub vibration prompt and a fourth sub vibration prompt;
the outputting of the focusing success vibration prompt for the photographic object according to the type of the photographic object and/or the position of the photographic object to which the photographic object belongs includes:
when the system time reaches a first sub-moment, outputting the first sub-vibration prompt with a first sub-vibration effect through terminal equipment; the first sub-vibration effect is used for representing that the shooting object type of the first shooting object belongs to a first set object type;
when the system time reaches a second sub-moment, outputting the second sub-vibration prompt with a second sub-vibration effect through the terminal equipment; the second sub-vibration effect is used for representing that the shooting object position of the first shooting object in the shooting interface belongs to a first set object position; the second sub-moment is later than the first sub-moment;
when the system time reaches a third sub-moment, outputting a third sub-vibration prompt with a third sub-vibration effect through the terminal equipment; the third sub-vibration effect is used for representing that the shooting object type of the second shooting object belongs to a second set object type; the third sub-time is later than the second sub-time;
when the system time reaches a fourth sub-moment, outputting a fourth sub-vibration prompt with a fourth sub-vibration effect through the terminal equipment; the fourth sub-vibration effect is used for representing that the shooting object position of the second shooting object in the shooting interface belongs to a second set object position; the fourth sub-moment is later than the third sub-moment; the time difference between the fourth sub-time and the third sub-time is the same as the time difference between the second sub-time and the first sub-time.
9. The method of claim 1, further comprising:
acquiring an object area where the shooting object is located in the shooting interface;
acquiring the area corresponding to the object area and the interface area corresponding to the shooting interface;
determining the shooting presentation effect of the shooting object according to the area of the region and the area of the interface;
if the shooting presenting effect is a reasonable presenting effect, outputting a reasonable vibration prompt;
and if the shooting presenting effect is an abnormal presenting effect, outputting an abnormal vibration prompt.
10. The method of claim 9, wherein determining the photographic rendering effect of the photographic object according to the region area and the interface area comprises:
determining an area ratio between the area of the region and the area of the interface;
if the area ratio is larger than a ratio threshold, determining the shooting presenting effect of the shooting object as an abnormal presenting effect;
and if the area ratio is smaller than the ratio threshold, determining the shooting presentation effect of the shooting object as a reasonable presentation effect.
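The area-ratio decision of claims 9 and 10 can be sketched as a single comparison. The threshold value is illustrative, and the claim does not specify the case where the ratio exactly equals the threshold, so the tie handling below is an assumption.

```python
def shooting_presentation_effect(region_area: float, interface_area: float,
                                 ratio_threshold: float = 0.8) -> str:
    """Classify the shooting presentation effect from the ratio between the
    object region area and the shooting interface area (claims 9 and 10)."""
    ratio = region_area / interface_area
    if ratio > ratio_threshold:
        return "abnormal"    # subject fills too much of the frame
    # ratio == threshold is unspecified in the claim; treated as reasonable here.
    return "reasonable"

effect = shooting_presentation_effect(90, 100)   # "abnormal": 0.9 > 0.8
```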
11. The method according to any one of claims 1-10, further comprising:
and if focusing on the shooting object fails, outputting a focusing failure vibration prompt for the shooting object.
12. The method according to any one of claims 1-10, further comprising:
acquiring the type of the shot object to which the shot object belongs;
generating object simulation audio aiming at the shooting object according to the shooting object type;
and synchronously outputting the object simulation audio when outputting the focusing success vibration prompt aiming at the shooting object.
13. The method according to any one of claims 1-10, further comprising:
acquiring the shooting object type of the shooting object;
if the shooting object type is the specified object type, acquiring key position data of the shooting object in the shooting interface;
inputting the key part data into a recognition model, and determining the predicted emotion of the shooting object through the recognition model and the key part data;
and if the predicted emotion is the designated emotion, synchronously outputting the emotion vibration prompt aiming at the predicted emotion when outputting the focusing success vibration prompt aiming at the shooting object.
14. A data processing apparatus, comprising:
the interface display module is used for displaying a shooting interface for executing shooting service;
the focusing module is used for focusing the shooting object when the shooting interface contains the shooting object;
and the vibration prompt module is used for outputting, if the shooting object is focused successfully, a focusing success vibration prompt for the shooting object according to the shooting object type to which the shooting object belongs and/or the shooting object position where the shooting object is located.
15. A computer device, comprising: a processor, a memory, and a network interface;
the processor is coupled to the memory and the network interface, wherein the network interface is configured to provide network communication functionality, the memory is configured to store a computer program, and the processor is configured to invoke the computer program to cause the computer device to perform the method of any one of claims 1-13.
16. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program is adapted to be loaded by a processor and to carry out the method of any one of claims 1-13.
CN202210900464.2A 2022-07-28 2022-07-28 Data processing method, device, equipment and readable storage medium Pending CN115242923A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210900464.2A CN115242923A (en) 2022-07-28 2022-07-28 Data processing method, device, equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN115242923A (en) 2022-10-25

Family

ID=83676722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210900464.2A Pending CN115242923A (en) 2022-07-28 2022-07-28 Data processing method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115242923A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202050478U (en) * 2011-02-23 2011-11-23 天津三星光电子有限公司 Digital camera suitable for handicapped
CN104243814A (en) * 2014-07-28 2014-12-24 小米科技有限责任公司 Analysis method for object layout in image and image shoot reminding method and device
CN106603832A (en) * 2016-12-02 2017-04-26 惠州Tcl移动通信有限公司 Intelligent photographing method based on intelligent mobile terminal and system thereof
US20190014255A1 (en) * 2014-06-27 2019-01-10 Nubia Technology Co., Ltd. Focusing state prompting method and shooting device
CN114500847A (en) * 2022-02-12 2022-05-13 北京蜂巢世纪科技有限公司 Focusing prompting method based on vibration motor, intelligent head-mounted display device and computer readable storage medium


Similar Documents

Publication Publication Date Title
CN108377334B (en) Short video shooting method and device and electronic terminal
CN103945121B (en) A kind of information processing method and electronic equipment
US20170285916A1 (en) Camera effects for photo story generation
CN111988528B (en) Shooting method, shooting device, electronic equipment and computer-readable storage medium
US9955134B2 (en) System and methods for simultaneously capturing audio and image data for digital playback
CN112019739A (en) Shooting control method and device, electronic equipment and storage medium
CN105635569B (en) Image pickup method, device and terminal
WO2016088602A1 (en) Information processing device, information processing method, and program
WO2015150889A1 (en) Image processing method and apparatus, and electronic device
WO2019213819A1 (en) Photographing control method and electronic device
CN107948660A (en) The method and device of Video coding adaptation
WO2015184903A1 (en) Picture processing method and device
CN110415318B (en) Image processing method and device
KR101898765B1 (en) Auto Content Creation Methods and System based on Content Recognition Technology
US10645274B2 (en) Server apparatus, distribution system, distribution method, and program with a distributor of live content and a viewer terminal for the live content including a photographed image of a viewer taking a designated body pose
JP2019118021A (en) Shooting control system, shooting control method, program, and recording medium
CN115242923A (en) Data processing method, device, equipment and readable storage medium
CN107197147A (en) The method of controlling operation thereof and device of a kind of panorama camera
CN110602405A (en) Shooting method and device
CN111587570A (en) Information apparatus and camera image sharing system
CN111279683A (en) Shooting control method and electronic device
CN112714299A (en) Image display method and device
KR101921162B1 (en) System for processing image information by image shooting of visually impaired people
CN109688258B (en) Multimedia information transmission method, device, terminal and readable storage medium
JP2020071519A (en) Guide device and guide system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40075773

Country of ref document: HK