CN114629869B - Information generation method, device, electronic equipment and storage medium - Google Patents

Information generation method, device, electronic equipment and storage medium

Info

Publication number
CN114629869B
Authority
CN
China
Prior art keywords
information
contact
shooting
application
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210270623.5A
Other languages
Chinese (zh)
Other versions
CN114629869A (en)
Inventor
王日健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202210270623.5A
Publication of CN114629869A
Application granted
Publication of CN114629869B
Active legal-status
Anticipated expiration legal-status

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/06: Message adaptation to terminal or network requirements
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application discloses an information generation method, an information generation device, electronic equipment, and a storage medium. The method includes: establishing an association relation between a first shooting object in a shooting preview interface and a first contact in a first application; and generating second information in the first application based on first voice information of the first shooting object acquired in the shooting process, wherein the second information is information corresponding to the first contact.

Description

Information generation method, device, electronic equipment and storage medium
Technical Field
The application belongs to the technical field of terminals, and particularly relates to an information generation method, an information generation device, electronic equipment and a storage medium.
Background
With the rapid development of intelligent terminals, users are increasingly accustomed to recording videos on their terminals to capture interesting content from work, study, or daily life. However, a recorded video generally occupies a large amount of storage space. If the user then wants to share the video with a contact over the mobile network, network traffic is consumed on both the sending side and the receiving side. In addition, if the quality of the recorded video is poor, the user may lose interest in sharing it.
Disclosure of Invention
The embodiments of the present application provide an information generation method, an information generation device, electronic equipment, and a storage medium, to solve the problem in the related art that sharing a recorded video with a contact over a mobile network after recording consumes a large amount of network traffic for both users.
In order to solve the technical problems, the application is realized as follows:
in a first aspect, an embodiment of the present application provides an information generating method, including:
establishing an association relation between a first shooting object in a shooting preview interface and a first contact in a first application;
generating second information in the first application based on the first voice information of the first shooting object acquired in the shooting process;
the second information is information corresponding to the first contact person.
In a second aspect, an embodiment of the present application further provides an information generating apparatus, including:
the association unit is used for establishing an association relation between a first shooting object in the shooting preview interface and a first contact in the first application;
the generation unit is used for generating second information in the first application based on the first voice information of the first shooting object acquired in the shooting process;
The second information is information corresponding to the first contact person.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the memory stores a program or instructions executable on the processor, the program or instructions implementing the steps of the information generation method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the information generating method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement an information generating method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the information generating method according to the first aspect.
In the embodiments of the present application, an association relation can be established between a first shooting object in a shooting preview interface and a first contact in a first application, and second information in the first application is generated based on first voice information of the first shooting object acquired in the shooting process, wherein the second information is information corresponding to the first contact. On the one hand, only the second information, which is text information or voice information, needs to be stored, so the occupied storage space is greatly reduced compared with storing the video itself; on the other hand, when the second information is shared with other contacts, the consumed network traffic is also greatly reduced, and there is no need to worry about video definition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments of the present application will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic implementation flow chart of an information generating method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a shooting preview interface in the information generating method provided in the embodiment of the present application;
fig. 3 is a schematic diagram of a target chat page generated in the information generating method provided in the embodiment of the present application;
fig. 4 is a schematic diagram of sharing a generated target chat record to a contact C in the information generating method provided in the embodiment of the present application;
fig. 5 is a schematic structural diagram of an information generating device according to an embodiment of the present application;
fig. 6 is a schematic hardware structure of an electronic device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that terms so used may be interchanged where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. In addition, the objects distinguished by "first", "second", etc. are usually of one type, and the number of objects is not limited; for example, the first object may be one or more than one. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates that the associated objects are in an "or" relationship.
The information generation method provided by the embodiments of the present application is described in detail below through specific embodiments and their application scenarios with reference to the accompanying drawings.
To solve the problem in the related art that sharing a recorded video with a contact over a mobile network after recording consumes a large amount of network traffic for both users, the present application provides an information generation method. The execution subject of the method may be, but is not limited to, a user terminal such as a mobile phone or a tablet computer that can be configured to execute the method provided by the embodiments of the present application.
For convenience of description, the embodiments of the method are described below by taking a terminal device capable of executing the method as the execution subject. It will be appreciated that taking a terminal device as the execution subject is merely an exemplary illustration and should not be construed as limiting the method.
Specifically, the information generation method provided by the application comprises the following steps: firstly, establishing an association relation between a first shooting object in a shooting preview interface and a first contact in a first application; then, generating second information in a first application based on first voice information of a first shooting object acquired in the shooting process; the second information is information corresponding to the first contact person.
In the embodiments of the present application, an association relation can be established between a first shooting object in a shooting preview interface and a first contact in a first application, and second information in the first application is generated based on first voice information of the first shooting object acquired in the shooting process, wherein the second information is information corresponding to the first contact. On the one hand, only the second information, which is text information or voice information, needs to be stored, so the occupied storage space is greatly reduced; on the other hand, when the second information is shared with other contacts, the consumed network traffic is also greatly reduced, and there is no need to worry about video definition.
The implementation process of the method is described in detail below with reference to the implementation flow diagram of the information generation method shown in fig. 1, and includes the following steps:
s110, establishing an association relation between a first shooting object in a shooting preview interface and a first contact in a first application.
The shooting preview interface may be a preview interface before video shooting starts, or a preview interface during video shooting. The first shooting object is a shooting object that appears in the shooting preview interface. The first application may be a chat or social application that maintains a contact list added by the user; the contact list includes information about a plurality of contacts, such as their nicknames, addresses, and avatars.
Specifically, in the shooting preview interface, either before or during video shooting, the user may choose to associate a shooting object appearing in the interface with a contact in the contact list of the first application. It should be appreciated that the first contact selected from the contact list is, in principle, the same person as the first shooting object it is associated with in the shooting preview interface.
Taking the shooting preview interface during video shooting as an example, consider filming a conference. A conference is usually long; if a video is recorded for every conference and shared with every participant, it occupies considerable storage space and, when shared, consumes network traffic for both the sender and the receivers. In this case, each participant who speaks in the conference can be associated with a contact in the contact list of the first application, so that the speech of each participant is converted into a voice message or text message on a chat page of the first application. This saves the storage space that would be occupied by directly storing the conference video.
Optionally, the user may be provided in the shooting preview interface with an entry for associating the first shooting object with the first contact in the first application. That is, in the shooting preview interface, when a first input from the user is received, the contact list of the first application, i.e. at least one contact option in the first application, is displayed, so that the user can manually select from these contact options the contact that matches the shooting object in the shooting preview interface. Specifically, establishing an association between a first shooting object in a shooting preview interface and a first contact in a first application includes:
Receiving a first input of a user under the condition that a shooting preview interface is displayed;
in response to the first input, displaying at least one contact option in the first application;
receiving a second input of a user to a first contact in the at least one contact option;
and responding to the second input, and establishing an association relation between the first shooting object and the first contact person in the shooting preview interface.
While the shooting preview interface is displayed, the first shooting object in the interface may be determined by the user through a click, long press, or similar operation before the first input. Specifically, when the user clicks or long-presses a shooting object in the shooting preview interface and that operation is received, the shooting object is determined to be the first shooting object.
Fig. 2 is a schematic diagram of a shooting preview interface in the information generation method provided in the embodiment of the present application. In fig. 2, while shooting a video, the user may click or long-press the "add contact" control 201 in the shooting preview interface to associate a shooting object appearing in the interface with a contact in the contact list of the first application. As an example, when the user clicks or long-presses the "add contact" control 201, at least one contact option from the contact list of the first application may be displayed, and the user may select the contact that matches a shooting object in the shooting preview interface so as to establish an association with the corresponding shooting object.
As an example, while at least one contact option from the contact list of the first application is displayed, the user may drag the selected contact onto the location where the corresponding shooting object is displayed in the shooting preview interface to establish an association between that contact and that shooting object.
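As an illustrative, non-limiting sketch of the association step S110, the bindings between shooting objects and contacts can be modeled as a simple mapping. The Kotlin code below is an explanatory assumption only; the class and function names (Contact, PhotoSubject, AssociationStore) are not defined by this application.

```kotlin
// Illustrative sketch of the association step (S110). All names are assumptions
// introduced for explanation; this is not an API defined by the application.
data class Contact(val id: String, val nickname: String)
data class PhotoSubject(val subjectId: String)

class AssociationStore {
    // subjectId -> contact in the first application bound to that shooting object
    private val bindings = mutableMapOf<String, Contact>()

    // Second input: the user picks a contact option for the selected shooting object.
    fun associate(subject: PhotoSubject, contact: Contact) {
        bindings[subject.subjectId] = contact
    }

    // Third input (described below): release a wrong association so it can be rebuilt.
    fun dissociate(subject: PhotoSubject) {
        bindings.remove(subject.subjectId)
    }

    fun contactFor(subject: PhotoSubject): Contact? = bindings[subject.subjectId]
}
```

Under this sketch, re-associating the first shooting object with a second contact, as described further below, amounts to a dissociate followed by an associate.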
Optionally, in order to improve the association efficiency between the shooting object and the contact person, the association between the shooting object and the contact person can be further realized through an image matching mode. Specifically, establishing an association between a first shooting object in a shooting preview interface and a first contact in a first application includes:
and under the condition that the first shooting object in the shooting preview interface and the first contact person in the first application meet the matching condition, establishing an association relation between the first shooting object and the first contact person.
Specifically, a contact whose avatar matches a shooting object appearing in the shooting preview interface can be selected from the first application through image recognition and matching, and that contact is associated with, i.e. bound to, the corresponding shooting object in the shooting preview interface. As an example, when the user clicks or long-presses the "add contact" control 201 in the shooting preview interface shown in fig. 2, an image of the first shooting object may be matched against the avatar of at least one contact in the contact list of the first application; when the avatar of the first contact matches the image of the first shooting object, an association relation is established between the first contact and the first shooting object.
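As a rough illustration of one possible matching condition, the following sketch compares an embedding of the first shooting object's image with embeddings of the contact avatars and accepts the best match above a threshold. The similarity measure, the threshold value, and the embedding representation are assumptions; the application does not prescribe a specific matching rule. The Contact class is reused from the sketch above.

```kotlin
import kotlin.math.sqrt

// Assumed matching rule: the contact whose avatar embedding has the highest
// cosine similarity to the shooting object's embedding, above a threshold, wins.
fun findMatchingContact(
    subjectEmbedding: FloatArray,
    contactEmbeddings: List<Pair<Contact, FloatArray>>, // each contact with its avatar embedding
    threshold: Float = 0.8f                             // illustrative value only
): Contact? {
    var best: Contact? = null
    var bestScore = threshold
    for ((contact, avatarEmbedding) in contactEmbeddings) {
        val score = cosineSimilarity(subjectEmbedding, avatarEmbedding)
        if (score >= bestScore) {
            bestScore = score
            best = contact
        }
    }
    return best
}

fun cosineSimilarity(a: FloatArray, b: FloatArray): Float {
    require(a.size == b.size) { "Embeddings must have the same dimension" }
    var dot = 0f
    var normA = 0f
    var normB = 0f
    for (i in a.indices) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (sqrt(normA) * sqrt(normB))
}
```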
Optionally, after the association between the first shooting object in the shooting preview interface and the first contact in the first application is established, if the user finds that the association is wrong, the association may be canceled, and the user is supported to select the correct contact from the first application to reestablish the association between the shooting object and the contact. Specifically, after establishing an association between a first shooting object in a shooting preview interface and a first contact in a first application, the method provided by the embodiment of the application further includes:
receiving a third input from the user;
responding to the third input, and releasing the association relation between the first shooting object and the first contact person;
and establishing an association relation between the first shooting object and a second contact in the first application.
As an example, the user may double-click the first shooting object in the shooting preview interface to release its association with the first contact. Alternatively, a button for releasing associations may be provided in the shooting preview interface; when the user selects it, a list of the associations between all shooting objects in the shooting preview interface and their corresponding contacts is displayed, and the user can select from this list the association to be released.
Optionally, after establishing the association between the first shooting object in the shooting preview interface and the first contact in the first application, the method provided in the embodiment of the present application further includes:
receiving a fourth input of a user to a first shooting object in a shooting preview interface;
responding to the fourth input, and displaying a first identifier corresponding to the first contact person in a preset range of the first shooting object;
receiving a fifth input of a user to the first shooting object;
in response to the fifth input, the first identification is canceled from being displayed.
The first identifier may be information that can uniquely identify the contact, such as an avatar, a name, a nickname, and the like of the contact.
As an example, after the association between the first shooting object in the shooting preview interface and the first contact in the first application is established, the avatar or other identifier of the contact need not appear in the shooting preview interface. When the user clicks, double-clicks, or long-presses a shooting object in the shooting preview interface, the avatar or other identifier of the contact corresponding to that shooting object may be displayed around it; after the user clicks the shooting object again, the identifier disappears from the shooting preview interface.
S120, second information in the first application is generated based on the first voice information of the first shooting object acquired in the shooting process.
The second information is information corresponding to the first contact person.
Optionally, to facilitate sharing and discussion of the shot video among the people appearing in the shooting preview interface, the embodiments of the present application may further automatically form a target group chat based on the shooting objects in the shooting preview interface, where the target group chat includes the contacts associated with those shooting objects. Specifically, generating the second information in the first application based on the first voice information of the first shooting object collected in the shooting process includes:
creating a target group chat based on the first contact and the second contact;
generating second information in the target group chat based on the first voice information of the first shooting object;
the second contact person is a contact person in the first application, and has an association relationship with a second shooting object in the shooting preview interface.
The second information may be text information or voice information. In addition, contacts other than those associated with the shooting objects in the shooting preview interface can be added to the target group chat as required.
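A minimal sketch of the group-chat step, assuming the bound contacts are collected through the AssociationStore introduced earlier; the GroupChat type and the default chat name are illustrative assumptions.

```kotlin
// Build the target group chat from every contact bound to a shooting object in
// the shooting preview interface; additional contacts can be added afterwards.
data class GroupChat(val name: String, val members: MutableList<Contact> = mutableListOf())

fun createTargetGroupChat(
    store: AssociationStore,
    subjects: List<PhotoSubject>,
    name: String = "Recording group"   // illustrative default name
): GroupChat {
    val chat = GroupChat(name)
    for (subject in subjects) {
        val contact = store.contactFor(subject) ?: continue   // e.g. the first and second contacts
        if (contact !in chat.members) chat.members.add(contact)
    }
    return chat
}
```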
Optionally, in order to preserve as much as possible the sound information of the shooting object during shooting, that sound information may be converted into the second information according to its start-stop time, as sketched after the following steps. Specifically, generating the second information in the first application based on the first voice information of the first shooting object collected in the shooting process includes:
acquiring the start-stop time of the first voice information;
second information is generated based on the start-stop time of the first voice information and the first voice information.
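A hedged sketch of this step: each speech segment of a bound shooting object is turned into a message that carries the segment's start time, and the messages are ordered by that time. The SpeechSegment and ChatMessage types and the transcribe hook are assumptions; the application only requires that the second information be generated from the first voice information and its start-stop time, and the second information may equally remain a voice message rather than text.

```kotlin
// Turn each speech segment of a bound shooting object into a message in the
// first application, ordered by start time. `transcribe` is a hypothetical
// speech-to-text hook; the audio could instead be kept as a voice message.
data class SpeechSegment(val subject: PhotoSubject, val startMs: Long, val endMs: Long, val audioPath: String)
data class ChatMessage(val sender: Contact, val timestampMs: Long, val text: String)

fun generateMessages(
    segments: List<SpeechSegment>,
    store: AssociationStore,
    transcribe: (String) -> String
): List<ChatMessage> =
    segments
        .sortedBy { it.startMs }
        .mapNotNull { segment ->
            store.contactFor(segment.subject)?.let { contact ->
                ChatMessage(contact, segment.startMs, transcribe(segment.audioPath))
            }
        }
```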
Optionally, after generating the second information in the first application based on the first voice information of the first shooting object acquired in the shooting process, the method provided in the embodiment of the present application further includes:
receiving a sixth input of a user to a shooting preview interface;
in response to the sixth input, a targeted chat page is generated based on the first contact and the second information.
Fig. 3 is a schematic diagram of a target chat page generated in the information generation method provided in the embodiment of the present application. In fig. 3, after the shooting objects in the recorded video have been bound to contacts in the contact list of the first application, the user may click the "generate chat page" control 202 in the shooting preview interface, so that the sound information of contact A in the shooting preview interface is converted into voice or text information 301 of contact A on the target chat page of the first application, and the sound information of contact B is converted into voice or text information 302 of contact B on the same target chat page.
Optionally, to give the user a more flexible way of generating information, only part of the captured video content may be converted into a chat page. Specifically, generating a target chat page based on the first contact and the second information includes:
acquiring a conversion range in a shot video;
and generating a target chat page by using the first contact and the second information in the conversion range in the shot video.
The conversion range in the photographed video may be a partial segment in the photographed video, or may be the entire content of the photographed video.
As an example, in fig. 3, after the "generate chat page" control 202 is clicked in the recorded video frame, the user may be offered a conversion range for the recorded video: the user can keep some video clips as video, while the remaining clips are converted into a chat page. That is, the sound information of a contact within the conversion range of the recorded video is converted into voice messages of that contact on the chat page of the first application.
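Continuing the earlier sketch, restricting conversion to the selected range can be expressed as a simple filter over the speech segments. The ConversionRange type and the containment rule (a segment must lie entirely inside the range) are assumptions.

```kotlin
// Convert only the segments that fall inside the user-selected conversion range;
// the rest of the recording is kept as video.
data class ConversionRange(val startMs: Long, val endMs: Long)

fun messagesInRange(
    segments: List<SpeechSegment>,
    range: ConversionRange,
    store: AssociationStore,
    transcribe: (String) -> String
): List<ChatMessage> =
    generateMessages(
        segments.filter { it.startMs >= range.startMs && it.endMs <= range.endMs },
        store,
        transcribe
    )
```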
Optionally, in order to facilitate the sharing operation of the target chat page, after generating the target chat page based on the first contact and the second information, the method provided in the embodiment of the present application further includes:
Receiving a seventh input of the user to the target chat page;
in response to the seventh input, a target chat record is generated based on the target chat page and sent to the target contact in the first application.
The user may also forward the generated target chat page to another contact. For example, if user A wants to forward the target chat page generated from the information of user A and user B to user C, the target chat page can be converted into a target chat record and sent to user C. Fig. 4 is a schematic diagram of sharing a generated target chat record with contact C in the information generation method provided in the embodiment of the present application. In fig. 4, after receiving the chat record 401 of user A and user B, user C may click a message in the chat record to play the corresponding voice.
The sound information of a contact in the target video refers to the information expressed by that contact's voice in the target video.
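One way to picture the forwarding step, under the same assumed types: the target chat page is flattened into a plain chat record that can be sent to the target contact. The record format shown is illustrative only and is not specified by the application.

```kotlin
// Package the generated target chat page into a forwardable chat record.
data class ChatPage(val chat: GroupChat, val messages: List<ChatMessage>)
data class ChatRecord(val title: String, val entries: List<String>)

fun toChatRecord(page: ChatPage): ChatRecord =
    ChatRecord(
        title = "Chat history of ${page.chat.name}",
        entries = page.messages.map { msg ->
            "[${msg.timestampMs} ms] ${msg.sender.nickname}: ${msg.text}"
        }
    )
```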
In the embodiments of the present application, an association relation can be established between a first shooting object in a shooting preview interface and a first contact in a first application, and second information in the first application is generated based on first voice information of the first shooting object acquired in the shooting process, wherein the second information is information corresponding to the first contact. On the one hand, only the second information, which may be text information or voice information, needs to be stored, so the occupied storage space is greatly reduced; on the other hand, when the second information is shared with other contacts, the consumed network traffic is also greatly reduced, and there is no need to worry about video definition.
The execution subject of the information generation method provided by the embodiments of the present application may be an information generating apparatus. In the embodiments of the present application, the information generating apparatus is described by taking the case where it executes the information generation method as an example.
As shown in fig. 5, a schematic structural diagram of an information generating apparatus 500 according to an embodiment of the present application includes:
an association unit 501, configured to establish an association relationship between a first shooting object in a shooting preview interface and a first contact in a first application;
a generating unit 502, configured to generate second information in the first application based on first voice information of the first shooting object acquired in the shooting process;
the second information is information corresponding to the first contact person.
With the information generating device 500 provided by the present application, an association relation can be established between a first shooting object in a shooting preview interface and a first contact in a first application, and second information in the first application is generated based on first voice information of the first shooting object acquired in the shooting process, wherein the second information is information corresponding to the first contact. On the one hand, only the second information, which may be text information or voice information, needs to be stored, so the occupied storage space is greatly reduced; on the other hand, when the second information is shared with other contacts, the consumed network traffic is also greatly reduced, and there is no need to worry about video definition.
Optionally, in an embodiment, the associating unit 501 is configured to:
receiving a first input of a user under the condition that a shooting preview interface is displayed;
in response to the first input, displaying at least one contact option in a first application;
receiving a second input of a user to a first contact in the at least one contact option;
and responding to the second input, and establishing an association relation between a first shooting object in the shooting preview interface and the first contact person.
Optionally, in an embodiment, the associating unit 501 is configured to:
and under the condition that a first shooting object in the shooting preview interface and a first contact person in a first application meet a matching condition, establishing an association relation between the first shooting object and the first contact person.
Optionally, in an embodiment, after the association unit 501 establishes an association between the first shooting object in the shooting preview interface and the first contact in the first application, the apparatus further includes:
a first receiving unit for receiving a third input of a user;
a releasing unit configured to release an association relationship between the first photographic subject and the first contact in response to the third input;
The establishing unit is used for establishing an association relation between the first shooting object and the second contact person in the first application.
Optionally, in an embodiment, after the association unit 501 establishes an association between the first shooting object in the shooting preview interface and the first contact in the first application, the apparatus further includes:
a second receiving unit configured to receive a fourth input of the first photographic subject in the photographic preview interface from a user;
the display unit is used for responding to the fourth input and displaying a first identifier corresponding to the first contact person in a preset range of the first shooting object;
a third receiving unit configured to receive a fifth input from a user to the first photographic subject;
and the cancellation unit is used for canceling the display of the first identification in response to the fifth input.
Optionally, in an embodiment, the generating unit 502 is configured to:
creating a target group chat based on the first contact and the second contact;
generating second information in the target group chat based on the first voice information of the first shooting object;
the second contact is a contact in the first application, and the second contact has an association relationship with a second shooting object in the shooting preview interface.
Optionally, in an embodiment, the generating unit 502 is configured to:
acquiring the start-stop time of the first voice information;
and generating the second information based on the start-stop time of the first voice information and the first voice information.
Optionally, in an embodiment, after the generating unit 502 generates the second information in the first application based on the first voice information of the first shooting object acquired in the shooting process, the apparatus further includes:
a fourth receiving unit, configured to receive a sixth input from a user to the shooting preview interface;
and the page generation unit is used for responding to the sixth input and generating a target chat page based on the first contact person and the second information.
Optionally, in one embodiment, after the page generating unit generates the target chat page based on the first contact and the second information, the apparatus further includes:
a fifth receiving unit, configured to receive a seventh input from a user to the target chat page;
and the record generation unit is used for responding to the seventh input, generating a target chat record based on the target chat page and sending the target chat record to the target contact person in the first application.
The information generating device in the embodiments of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palmtop computer, vehicle-mounted electronic device, mobile internet device (MID), augmented reality (AR)/virtual reality (VR) device, robot, wearable device, ultra-mobile personal computer (UMPC), netbook, or personal digital assistant (PDA), and may also be a server, network attached storage (NAS), personal computer (PC), television (TV), teller machine, self-service machine, or the like; the embodiments of the present application are not specifically limited in this respect.
The information generating device in the embodiments of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The information generating device provided in the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to fig. 4, and in order to avoid repetition, a detailed description is omitted here.
Optionally, as shown in fig. 6, an embodiment of the present application further provides an electronic device M06, including a processor M61 and a memory M62, where the memory M62 stores a program or instructions executable on the processor M61. When executed by the processor M61, the program or instructions implement the steps of the above information generating method embodiments and can achieve the same technical effects; to avoid repetition, details are not repeated here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 7 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: radio frequency unit 701, network module 702, audio output unit 703, input unit 704, sensor 705, display unit 706, user input unit 707, interface unit 708, memory 709, and processor 710.
Those skilled in the art will appreciate that the electronic device 700 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 710 via a power management system so that functions such as charge management, discharge management, and power consumption management are performed through the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components, which is not described in detail here.
The processor 710 is configured to establish an association relation between a first shooting object in a shooting preview interface and a first contact in a first application, and to generate second information in the first application based on first voice information of the first shooting object acquired in the shooting process; the second information is information corresponding to the first contact.
Optionally, a user input unit 707 is configured to receive a first input of a user in a case where a shooting preview interface is displayed; a display unit 706 for displaying at least one contact option in the first application in response to the first input; a user input unit 707 for receiving a second input from a user to a first contact in the at least one contact option; and the processor 710 is configured to establish an association between the first photographic object in the photographic preview interface and the first contact in response to the second input.
Optionally, the processor 710 is further configured to establish an association relationship between a first shooting object in the shooting preview interface and a first contact in a first application if the first shooting object and the first contact meet a matching condition.
Optionally, the user input unit 707 is further configured to receive a third input from the user; the processor 710 is further configured to, in response to the third input, release an association relationship between the first photographic object and the first contact; and establishing an association relation between the first shooting object and a second contact in the first application.
Optionally, the user input unit 707 is further configured to receive a fourth input from a user to the first shooting object in the shooting preview interface; a display unit 706, configured to display, in response to the fourth input, a first identifier corresponding to the first contact within a preset range of the first shooting object; a user input unit 707 further configured to receive a fifth input from a user to the first shooting object; and a display unit 706, configured to cancel display of the first identifier in response to the fifth input.
Optionally, the processor 710 is further configured to create a target group chat based on the first contact and the second contact; generating second information in the target group chat based on the first voice information of the first shooting object; the second contact is a contact in the first application, and the second contact has an association relationship with a second shooting object in the shooting preview interface.
Optionally, the processor 710 is further configured to obtain a start-stop time of the first voice information; and generating the second information based on the start-stop time of the first voice information and the first voice information.
Optionally, the user input unit 707 is further configured to receive a sixth input from the user to the shooting preview interface; processor 710 is further configured to generate a target chat page based on the first contact and the second information in response to the sixth input.
Optionally, the user input unit 707 is further configured to receive a seventh input from the user on the target chat page; processor 710 is further configured to generate a target chat record based on the target chat page in response to the seventh input, and send the target chat record to a target contact in the first application.
With the electronic device provided by the present application, an association relation can be established between a first shooting object in a shooting preview interface and a first contact in a first application, and second information in the first application is generated based on first voice information of the first shooting object acquired in the shooting process, wherein the second information is information corresponding to the first contact. On the one hand, only the second information, which may be text information or voice information, needs to be stored, so the occupied storage space is greatly reduced; on the other hand, when the second information is shared with other contacts, the consumed network traffic is also greatly reduced, and there is no need to worry about video definition.
It should be appreciated that in embodiments of the present application, the input unit 704 may include a graphics processor (Graphics Processing Unit, GPU) 7041 and a microphone 7042, with the graphics processor 7041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes at least one of a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts, a touch detection device and a touch processor. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system and the application programs or instructions required for at least one function (such as a sound playing function and an image playing function). Further, the memory 709 may include volatile memory or nonvolatile memory, or the memory 709 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synclink DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 709 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 710 may include one or more processing units; optionally, processor 710 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, and the like, and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 710.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the processes of the embodiment of the information generating method are implemented, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is configured to run a program or an instruction, implement each process of the above information generating method embodiment, and achieve the same technical effect, so as to avoid repetition, and not be repeated here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
The embodiments of the present application provide a computer program product, which is stored in a storage medium, and the program product is executed by at least one processor to implement the respective processes of the embodiments of the information generating method, and achieve the same technical effects, and are not described herein again for avoiding repetition.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; the functions may also be performed in a substantially simultaneous manner or in a reverse order depending on the functionality involved. For example, the described methods may be performed in an order different from that described, and steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is the preferred implementation. Based on such an understanding, the technical solutions of the present application, in essence or the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art can make many other forms without departing from the spirit of the present application and the scope of the claims, and these also fall within the protection of the present application.

Claims (11)

1. An information generation method, comprising:
establishing an association relation between a first shooting object in a shooting preview interface and a first contact in a first application;
generating second information in the first application based on the first voice information of the first shooting object acquired in the shooting process;
the second information is information corresponding to the first contact person;
after generating the second information in the first application based on the first voice information of the first shooting object acquired in the shooting process, the method further includes:
receiving a sixth input of a user to the shooting preview interface;
responsive to the sixth input, a targeted chat page is generated based on the first contact and the second information.
2. The method of claim 1, wherein the associating the first photographic subject in the photographic preview interface with the first contact in the first application comprises:
receiving a first input of a user under the condition that the shooting preview interface is displayed;
in response to the first input, displaying at least one contact option in a first application;
receiving a second input of a user to a first contact in the at least one contact option;
and responding to the second input, and establishing an association relation between a first shooting object in the shooting preview interface and the first contact person.
3. The method of claim 1, wherein the associating the first photographic subject in the photographic preview interface with the first contact in the first application comprises:
and under the condition that a first shooting object in the shooting preview interface and a first contact person in a first application meet a matching condition, establishing an association relation between the first shooting object and the first contact person.
4. The method of claim 1, wherein after the associating the first photographic subject in the photographic preview interface with the first contact in the first application, the method further comprises:
receiving a third input from the user;
responding to the third input, and releasing the association relation between the first shooting object and the first contact person;
and establishing an association relation between the first shooting object and a second contact in the first application.
5. The method of claim 1, wherein after the associating the first photographic subject in the photographic preview interface with the first contact in the first application, the method further comprises:
receiving a fourth input of a user to the first shooting object in the shooting preview interface;
responding to the fourth input, and displaying a first identifier corresponding to the first contact person in a preset range of the first shooting object;
receiving a fifth input of a user to the first shooting object;
in response to the fifth input, the first identification is canceled from being displayed.
6. The method of claim 1, wherein the generating the second information in the first application based on the first voice information of the first subject acquired during the capturing includes:
creating a target group chat based on the first contact and the second contact;
generating second information in the target group chat based on the first voice information of the first shooting object;
the second contact is a contact in the first application, and the second contact has an association relationship with a second shooting object in the shooting preview interface.
7. The method of claim 1, wherein generating the second information in the first application based on the first voice information of the first subject acquired during the capturing comprises:
acquiring the start-stop time of the first voice information;
and generating the second information based on the start-stop time of the first voice information and the first voice information.
8. The method of claim 1, wherein after the generating a target chat page based on the first contact and the second information, the method further comprises:
receiving a seventh input of a user to the target chat page;
and generating a target chat record based on the target chat page in response to the seventh input, and sending the target chat record to a target contact in the first application.
9. An information generating apparatus, comprising:
the association unit is used for establishing an association relation between a first shooting object in the shooting preview interface and a first contact in the first application;
the generation unit is used for generating second information in the first application based on the first voice information of the first shooting object acquired in the shooting process;
the second information is information corresponding to the first contact person;
the apparatus further comprises:
a fourth receiving unit, configured to receive a sixth input from a user to the shooting preview interface;
and the page generation unit is used for responding to the sixth input and generating a target chat page based on the first contact person and the second information.
10. An electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which when executed by the processor implements the steps of the information generating method as claimed in any one of claims 1 to 8.
11. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the information generating method according to any of claims 1-8.
CN202210270623.5A 2022-03-18 2022-03-18 Information generation method, device, electronic equipment and storage medium Active CN114629869B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210270623.5A CN114629869B (en) 2022-03-18 2022-03-18 Information generation method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114629869A CN114629869A (en) 2022-06-14
CN114629869B true CN114629869B (en) 2024-04-16

Family

ID=81901493

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210270623.5A Active CN114629869B (en) 2022-03-18 2022-03-18 Information generation method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114629869B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110271209A1 (en) * 2010-04-30 2011-11-03 American Teleconferncing Services Ltd. Systems, Methods, and Computer Programs for Providing a Conference User Interface
US8626847B2 (en) * 2010-04-30 2014-01-07 American Teleconferencing Services, Ltd. Transferring a conference session between client devices
WO2021051024A1 (en) * 2019-09-11 2021-03-18 Educational Vision Technologies, Inc. Editable notetaking resource with optional overlay
CN113497909B (en) * 2020-03-18 2022-12-02 华为技术有限公司 Equipment interaction method and electronic equipment

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103685726A (en) * 2012-09-21 2014-03-26 三星电子株式会社 System for transmitting image and associated message data
CN103677609A (en) * 2012-09-24 2014-03-26 华为技术有限公司 Picture processing method and terminal
CN104702845A (en) * 2015-03-16 2015-06-10 西安酷派软件科技有限公司 End-to-end interactive shooting method, device and relevant equipments
KR20160131720A (en) * 2015-05-08 2016-11-16 엘지전자 주식회사 Mobile terminal and method for controlling the same
CN106888155A (en) * 2017-01-21 2017-06-23 上海量明科技发展有限公司 Information gathering and shared method, client and system
CN108874258A (en) * 2017-05-11 2018-11-23 腾讯科技(深圳)有限公司 Share the method and device of record screen video
CN107529031A (en) * 2017-08-18 2017-12-29 广州视源电子科技股份有限公司 A kind of recording method, device, equipment and the storage medium of writing on the blackboard process
WO2019227309A1 (en) * 2018-05-29 2019-12-05 深圳市大疆创新科技有限公司 Tracking photographing method and apparatus, and storage medium
KR20200025193A (en) * 2018-08-29 2020-03-10 주식회사 꿈많은청년들 Recording Midium
CN112698767A (en) * 2019-10-23 2021-04-23 腾讯科技(深圳)有限公司 Multimedia file sending method and device and electronic equipment
CN113014732A (en) * 2021-02-04 2021-06-22 腾讯科技(深圳)有限公司 Conference record processing method and device, computer equipment and storage medium
CN113727021A (en) * 2021-08-27 2021-11-30 维沃移动通信(杭州)有限公司 Shooting method and device and electronic equipment
CN113873165A (en) * 2021-10-25 2021-12-31 维沃移动通信有限公司 Photographing method and device and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of a Dynamic Picture Management System on the Android Platform; 孔令美; 计算机光盘软件与应用; 2014-11-15 (Issue 22); pp. 102-103 *
Network Sharing, Everywhere; 刘勇; 胡服骑射; 阿刚; 电脑迷; 2007-05-01 (Issue 09); pp. 47-53 *

Also Published As

Publication number Publication date
CN114629869A (en) 2022-06-14

Similar Documents

Publication Publication Date Title
CN106921560B (en) Voice communication method, device and system
CN112770059B (en) Photographing method and device and electronic equipment
CN114500432A (en) Session message transceiving method and device, electronic equipment and readable storage medium
CN112261218B (en) Video control method, video control device, electronic device and readable storage medium
CN106604147A (en) Video processing method and apparatus
WO2022135323A1 (en) Image generation method and apparatus, and electronic device
CN112532885A (en) Anti-shake method and device and electronic equipment
CN114153362A (en) Information processing method and device
CN112533052A (en) Video sharing method and device, electronic equipment and storage medium
CN114629869B (en) Information generation method, device, electronic equipment and storage medium
CN113518143B (en) Interface input source switching method and device, electronic equipment and storage medium
CN115473867A (en) Message sending method and device, electronic equipment and storage medium
CN112637508B (en) Camera control method and device and electronic equipment
CN112565913B (en) Video call method and device and electronic equipment
CN115103231A (en) Video call method and device, first electronic equipment and second electronic equipment
CN109889662B (en) Information forwarding method and device, electronic equipment and computer readable storage medium
CN113938759A (en) File sharing method and file sharing device
CN111857467B (en) File processing method and electronic equipment
CN115623321A (en) Message processing method and device, electronic equipment and readable storage medium
CN117750177A (en) Image display method, device, electronic equipment and medium
CN116916147A (en) Image processing method, image sending device and electronic equipment
CN117319544A (en) Message processing method, device, electronic equipment and readable storage medium
CN117294925A (en) Method, device, equipment and storage medium for displaying cloud mobile phone video
CN113485670A (en) Voice information processing method and device and electronic equipment
CN115052107A (en) Shooting method, shooting device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant