CN115714877B - Multimedia information processing method and device, electronic equipment and storage medium

Multimedia information processing method and device, electronic equipment and storage medium

Info

Publication number
CN115714877B
Authority
CN
China
Prior art keywords
user
image
scene image
scene
video
Prior art date
Legal status
Active
Application number
CN202211440398.1A
Other languages
Chinese (zh)
Other versions
CN115714877A (en)
Inventor
耿弘毅
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202211440398.1A
Publication of CN115714877A
Application granted
Publication of CN115714877B
Active legal status (current)
Anticipated expiration legal status


Abstract

The invention discloses a multimedia information processing method and apparatus, an electronic device, and a storage medium, and belongs to the technical field of intelligent display of image information. A specific implementation of the method comprises the following steps: in response to detecting a video generation request for a first user identifier, acquiring a first user scene image set corresponding to the first user identifier from a user multimedia database; presenting the first set of user scene images; in response to detecting a selection operation for at least one first user scene image in the first set of user scene images, generating a first user scene image review video based on each selected first user scene image; and playing the first user scene image review video. This implementation enriches the functional diversity of venue information display devices and also improves user stickiness between the user and the venue.

Description

Multimedia information processing method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of intelligent display of image information, and in particular to a multimedia information processing method and apparatus, an electronic device, and a storage medium.
Background
Currently, most venues provide promotional information about themselves, for example on fixed signage or electronic display equipment. Even when an electronic device is used for display, its functions are often limited, for example to showing which units are associated with the venue and their introductory information. In such display scenarios, a device operating in a fixed mode cannot be tailored to the needs of each individual, and even so-called intelligent devices can only present multimedia information according to preset programs. Faced with more complex on-site situations, such as human-computer interaction or related information retrieval, traditional electronic display devices struggle to accurately capture the situation on site and to accurately identify and acquire the user's needs and basic information, so the corresponding display and related services cannot be delivered precisely. How to solve the above problems has therefore become an urgent issue.
Disclosure of Invention
The invention aims to provide a multimedia information processing method and apparatus, an electronic device, and a storage medium. When a first user issues a video generation request to review their time at a venue, a corresponding review video is generated and played using the user images of the first user at the venue, which enriches the functional diversity of venue information display devices and improves user stickiness between the user and the venue.
In order to achieve the above purpose, the present invention provides the following technical solutions: the multimedia information processing method is applied to intelligent image display and comprises the steps of responding to detection of a video generation request aiming at a first user identifier, and acquiring a first user scene image set corresponding to the first user identifier from a user multimedia database; presenting the first set of user scene images; in response to detecting a selection operation for at least one first user scene image in the first set of user scene images, generating a first user scene image review video based on each first user scene image selected; and playing the first user scene image review video.
Preferably, the method further comprises: in response to detecting an export operation for the first user-scene-image review video and a target export address, the first user-scene-image review video is sent to an electronic device corresponding to the target export address.
Preferably, the first user identification is obtained by: receiving name keywords input by a first user; inquiring user information matched with the name keywords in a user information database; and determining the user identification corresponding to the queried user information as the first user identification.
Preferably, the first user identification is obtained by: acquiring a user photo acquired in real time; matching the user photo with a user image in a user multimedia database; and in response to the existence of the user image matched with the user photo, determining a user identification corresponding to the user image matched with the user photo as the first user identification.
Preferably, the method further comprises: and in response to the absence of a user image matching the user photo, presenting first prompt information for indicating that the first user takes the photo on site and then generates a review video again.
Preferably, the method further comprises: in response to detecting an image synthesis operation of a second user on a second user image and a target scene image, extracting a face image in the second user image, and merging the extracted face image into the target scene image to generate a second user scene image; and presenting the second user scene image.
Preferably, the target scene image is obtained by: presenting each scene image in a preset scene image set; in response to detecting a selection operation by the second user of a scene image of the presented scene images, determining the selected scene image as the target scene image.
Preferably, the second user image is obtained by: acquiring a camera real-time acquisition image as a second user image; or acquiring an image uploaded by the second user as a second user image.
Preferably, the method further comprises: and correspondingly storing the second user scene image and the second user identification into the user multimedia database.
Preferably, the method further comprises: responding to the detection of the video playing request, and presenting a video playing directory interface; and responding to the detection of the playing request for the target video in the video playing directory interface, and acquiring and playing the target video.
Preferably, a multimedia information processing apparatus is provided, comprising: an acquisition unit configured to acquire a first user scene image set corresponding to a first user identifier in a user multimedia database in response to detection of a video generation request for the first user identifier; a presentation unit configured to present the first set of user scene images; a generation unit configured to generate a first user scene image review video based on each selected first user scene image in response to detecting a selection operation for at least one first user scene image in the first set of user scene images; and a playing unit configured to play the first user scene image review video.
Preferably, an electronic device is provided, comprising: one or more processors; and storage means having stored thereon one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-10.
Preferably, a computer-readable storage medium is provided, storing a computer program which, when executed by one or more processors, implements the method of any of claims 1-10.
Compared with the prior art, the invention has the beneficial effects that:
(1) According to the invention, matched hardware and data processing software are integrated, which effectively improves the functional diversity of existing social venue information display equipment. The basic information of the first user is detected and identified, and the intention and instructions of the first user are then acquired; when the first user requests a review video of their time at the venue, the user images of the first user at the venue are used to generate and play the corresponding review video.
(2) The multimedia information processing method is equipped with corresponding hardware facilities. Basic information of the first user can be obtained through video and image equipment and then combined into application scenes, and the first user's needs can be determined, in combination with the network, from instruction information entered manually or communicated through voice, body language, and the like, so as to provide the corresponding services.
(3) The invention is applicable to venues such as a visitor's place of origin, current work unit, and workplace, and can be used for image introductions at conference centers, intelligent guidance in shopping malls, promotional introductions at sample exhibitions, and so on. For example, for all former and current staff, short-term service staff, and important guests who have worked at or visited the work unit or workplace, the invention can achieve the nostalgic recall effect captured by the saying that a person leaves behind a name as a wild goose leaves behind its call.
Drawings
FIG. 1 is a diagram of an exemplary system architecture of the present invention;
FIG. 2 is a flow chart of one embodiment of a method of multimedia information processing;
FIG. 3 is a schematic diagram of a structure of an embodiment of a multimedia information processing apparatus;
FIG. 4 is a schematic diagram of a computer system of an electronic device according to an embodiment of the invention.
In the figures: 101. a mobile phone terminal; 102. a tablet terminal; 103. a computer terminal; 104. a network; 105. a server; 201. step one; 202. step two; 203. step three; 204. step four; 205. step five; 206. step six; 207. step seven; 208. step eight; 209. step nine; 210. step ten; 301. an acquisition unit; 302. a presentation unit; 303. a generating unit; 304. a playing unit; 401. a processing device; 402. a ROM; 403. a RAM; 404. a bus; 405. an interface; 406. an input device; 407. an output device; 408. a storage device; 409. a communication device.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the invention may be practiced without one or more of these details. In other instances, well-known features have not been described in detail in order to avoid obscuring the invention. In order to make the solution of the present application better understood by those skilled in the art, the embodiments of the present application are described below in detail with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of the present application.
Furthermore, the terms "mounted," "configured," "provided," "connected," "coupled," and "sleeved" are to be construed broadly. For example, it may be a fixed connection, a removable connection, or a unitary construction; may be a mechanical connection, or an electrical connection; may be directly connected, or indirectly connected through intervening media, or may be in internal communication between two devices, elements, or components. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Embodiment one:
as shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105; the network 104 is a medium for providing a communication link between the terminal devices 101, 102, 103 and the server 105; the network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the server 105 through the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like; various communication client applications, such as a natural language processing class application, a voice recognition class application, a database application, a web browser application, a shopping class application, a search class application, an instant messaging tool, a mailbox client, social platform software, and the like, may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be hardware or software; when the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a sound collection device, a video collection device, and a display screen, including but not limited to smart phones, tablet computers, electronic book readers, MP3 players, MP4 players, laptop portable computers, desktop computers, and the like; when the terminal apparatuses 101, 102, 103 are software, they can be installed in the above-listed terminal apparatuses; which may be implemented as a plurality of software or software modules, or as a single software or software module.
In some cases, the provided multimedia information processing method may be performed by the terminal devices 101, 102, 103, and accordingly, the multimedia information processing apparatus may be provided in the terminal devices 101, 102, 103; in this case, the system architecture 100 may not include the server 105; in some cases, the provided multimedia information processing method may be jointly executed by the terminal device 101, 102, 103 and the server 105, for example, the steps of detecting a video generation request for a first user identifier, presenting the first set of user scene images, etc. may be executed by the terminal device 101, 102, 103, the steps of acquiring the first set of user scene images corresponding to the first user identifier in the user multimedia database, etc. may be executed by the server 105; accordingly, the multimedia information processing apparatus may also be provided in the terminal devices 101, 102, 103 and the server 105, respectively.
The server 105 may be hardware or software; when the server 105 is hardware, it may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server; when server 105 is software, it may be implemented as a plurality of software or software modules or as a single software or software module; it should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative; there may be any number of terminal devices, networks, and servers, as desired for implementation.
Embodiment two:
as shown in fig. 2, a flow 200 according to one embodiment of a multimedia information processing method is shown; the multimedia information processing method comprises the following steps:
in step 201, in response to detecting a video generation request for a first user identifier, a first set of user scene images corresponding to the first user identifier is obtained in a user multimedia database.
In this embodiment, the execution body of the multimedia information processing method may acquire, in the user multimedia database, the first user scene image set corresponding to the first user identifier when the video generation request for the first user identifier is detected.
In the user multimedia database, the user multimedia data of the user indicated by a user identifier are stored in correspondence with that user identifier. The user multimedia data may be data captured of the user at the venue where the terminal device acting as the execution body is located, and may be, for example, images or video.
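By way of illustration only, the following is a minimal sketch of the lookup performed in step 201, assuming a simple SQLite store; the table and column names (user_media, user_id, media_type, path, captured_at) are assumptions of this sketch and are not specified by the embodiment.

```python
# Minimal sketch of the user multimedia database lookup in step 201.
# Schema is assumed: user_media(user_id, media_type, path, captured_at).
import sqlite3

def get_first_user_scene_images(db_path: str, user_id: str) -> list[str]:
    """Return file paths of the scene images stored for the given user identifier."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT path FROM user_media "
            "WHERE user_id = ? AND media_type = 'image' "
            "ORDER BY captured_at",
            (user_id,),
        ).fetchall()
    return [path for (path,) in rows]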
The first user identification may be determined in various ways. In one approach, the first user identification is determined by: first, receiving name keywords input by the first user; second, querying a user information database for user information matching the name keywords, the user information database storing the correspondence between user identifiers and names; finally, determining the user identifier corresponding to the queried user information as the first user identifier. In this way, a user does not need to remember their own user identifier and only needs to input name keywords, which reduces the complexity of the user's operation.
The first user identification may also be determined by: first, obtaining a user photo captured in real time, for example from a camera communicatively connected to the execution body; second, matching the user photo against the user images in the user multimedia database; finally, in response to the existence of a user image matching the user photo, determining the user identifier corresponding to that matching user image as the first user identifier. In this optional manner, using a face image for identity verification improves the security of the user's identity verification.
When the user photo is matched against the user images in the user multimedia database, if no user image matching the user photo exists, first prompt information is presented instructing the first user to take a photo on site and then generate the review video again.
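The following is a hedged sketch of the photo-matching alternative, using the open-source face_recognition library as a stand-in for whatever face matcher the execution body actually employs; the function names, the known_users mapping, and the tolerance value are illustrative assumptions.

```python
# Sketch of photo-based identification: match a live photo against stored user images.
import face_recognition

def identify_user_by_photo(photo_path: str, known_users: dict[str, str]) -> str | None:
    """known_users maps user_id -> path of a stored user image.
    Returns the matching user_id, or None (the caller then presents the
    first prompt asking the user to take a photo on site)."""
    probe = face_recognition.load_image_file(photo_path)
    probe_encodings = face_recognition.face_encodings(probe)
    if not probe_encodings:
        return None  # no face detected in the live photo
    probe_encoding = probe_encodings[0]

    for user_id, image_path in known_users.items():
        stored = face_recognition.load_image_file(image_path)
        stored_encodings = face_recognition.face_encodings(stored)
        if stored_encodings and face_recognition.compare_faces(
            [stored_encodings[0]], probe_encoding, tolerance=0.5
        )[0]:
            return user_id
    return None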
Step 202, presenting the first set of user scene images. The execution body may present the first set of user scene images after it is obtained in step 201, for example on a display device connected to the execution body.
In response to detecting a selection operation for at least one first user scene image in the first set of user scene images, a first user scene image review video is generated based on each first user scene image selected, step 203.
While presenting the first set of user scene images in step 202, the execution body may further detect a selection operation performed on the presented first user scene images.
If a selection operation for at least one first user scene image in the first set of user scene images is detected, a first user scene image review video may be generated based on each selected first user scene image. Various methods of generating video from images may be employed, and this application is not particularly limited in this respect.
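As one possible illustration of step 203, the sketch below simply holds each selected image on screen for a few seconds using OpenCV's VideoWriter; the frame size, codec, and per-image duration are assumptions, since the embodiment does not prescribe a generation method.

```python
# Sketch of step 203: turn the selected scene images into a simple review video.
import cv2

def build_review_video(image_paths: list[str], out_path: str,
                       fps: int = 25, seconds_per_image: float = 3.0,
                       size: tuple[int, int] = (1280, 720)) -> None:
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(out_path, fourcc, fps, size)
    for path in image_paths:
        image = cv2.imread(path)
        if image is None:
            continue  # skip unreadable files
        frame = cv2.resize(image, size)
        for _ in range(int(fps * seconds_per_image)):
            writer.write(frame)  # repeat the frame to hold the image on screen
    writer.release()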
Step 204, playing a first user scene image review video; in some embodiments, the foregoing execution body may further execute the following step 205 after executing the step 204.
Step 205, in response to detecting an export operation for the first user scene image review video together with a target export address, sending the first user scene image review video to the electronic device corresponding to the target export address. The target export address may be any of various addresses, for example an email address, or a specific folder address on the electronic device on which the execution body runs. Through step 205, the first user scene image review video can be exported to a designated address, which enriches the multimedia information processing functions. In some embodiments, the execution body may further execute the following steps 206 and 207 after executing step 204.
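For the email case of step 205 above, a minimal sketch using Python's standard smtplib is given below; the SMTP host, sender address, and subject line are placeholders, and a folder-type export would instead amount to a simple file copy.

```python
# Sketch of step 205 (email variant): send the review video to the target export address.
import smtplib
from email.message import EmailMessage
from pathlib import Path

def export_review_video(video_path: str, target_address: str,
                        smtp_host: str = "smtp.example.com",
                        sender: str = "kiosk@example.com") -> None:
    msg = EmailMessage()
    msg["Subject"] = "Your scene image review video"
    msg["From"] = sender
    msg["To"] = target_address
    msg.set_content("The review video you requested is attached.")
    msg.add_attachment(Path(video_path).read_bytes(),
                       maintype="video", subtype="mp4",
                       filename=Path(video_path).name)
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)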
Step 206, in response to detecting an image synthesis operation by a second user on a second user image and a target scene image, extracting the face image in the second user image and merging the extracted face image into the target scene image to generate a second user scene image. The second user image may be, for example, an image containing the second user's face. The target scene image may be a scene image of one of the various scenes at the venue corresponding to the geographic location of the execution body.
Step 207, presenting the second user scene image. In this optional manner, the user's face image is merged into the target scene image via step 206, and an image of the second user appearing in the target scene can be presented to the user, giving the user a feeling of being on the scene.
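A rough sketch of the face extraction and merging of step 206 is shown below, using an OpenCV Haar-cascade detector as a stand-in for the face extractor; the paste position and scale within the scene image are arbitrary illustration values.

```python
# Sketch of step 206: cut the face out of the second user image and paste it into the scene.
import cv2

def compose_user_scene(user_image_path: str, scene_image_path: str,
                       out_path: str) -> bool:
    user_img = cv2.imread(user_image_path)
    scene_img = cv2.imread(scene_image_path)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(cv2.cvtColor(user_img, cv2.COLOR_BGR2GRAY),
                                      scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return False  # no face found, nothing to merge
    x, y, w, h = faces[0]
    face = user_img[y:y + h, x:x + w]
    # Place the face in the lower-right quarter of the scene (illustrative choice only).
    target_h = scene_img.shape[0] // 4
    target_w = int(w * target_h / h)
    face = cv2.resize(face, (target_w, target_h))
    oy, ox = scene_img.shape[0] - target_h, scene_img.shape[1] - target_w
    scene_img[oy:oy + target_h, ox:ox + target_w] = face
    cv2.imwrite(out_path, scene_img)
    return True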
The target scene image may be obtained by: first, presenting each scene image in a preset scene image set, where the scene images may be images of different scenes at the venue corresponding to the geographic location of the execution body; then, in response to detecting a selection operation by the second user on one of the presented scene images, determining the selected scene image as the target scene image. That is, in this alternative, the second user may select the target scene image from the preset scene image set.
The second user image may be obtained by:
acquiring an image captured in real time by the camera as the second user image, or acquiring an image uploaded by the second user as the second user image. In this alternative, the second user may provide their user image in multiple ways, which facilitates the subsequent generation of the second user scene image. In some embodiments, the execution body may further execute the following step 208 after executing step 207.
Step 208, storing the second user scene image and the second user identifier correspondingly in the user multimedia database. Through step 208, the second user scene image generated in step 207 is stored in the user multimedia database so that the second user can conveniently review it later. In some embodiments, the execution body may further execute the following steps 209 and 210.
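Continuing the assumed SQLite schema from the earlier lookup sketch, step 208 above could be illustrated as follows; the schema remains an assumption of the sketch.

```python
# Sketch of step 208: store the composed scene image against the second user's identifier.
import sqlite3
import time

def store_user_scene_image(db_path: str, user_id: str, image_path: str) -> None:
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "INSERT INTO user_media (user_id, media_type, path, captured_at) "
            "VALUES (?, 'image', ?, ?)",
            (user_id, image_path, int(time.time())),
        )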
In response to detecting the video play request, a video play directory interface is presented, step 209.
Step 210, in response to detecting a play request for a target video in the video play directory interface, acquiring and playing the target video.
The video play directory interface is associated with videos related to the venue corresponding to the geographic location of the execution body. The user can therefore select any of the venue-related video content to play, which makes it convenient to learn about the venue through video and enriches the functions of the electronic device on which the execution body runs.
When a first user requests generation of a video reviewing their time at the venue, the corresponding review video is generated and played using the user images of the first user at the venue, which enriches the functional diversity of the venue information display device and improves user stickiness between the user and the venue.
Embodiment III:
as an implementation of the method shown in the above figures, an embodiment of a multimedia information processing apparatus is provided, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 3, the multimedia information processing apparatus 300 includes: an acquisition unit 301, a presentation unit 302, a generation unit 303, and a playback unit 304. Wherein, the obtaining unit 301 is configured to obtain, in response to detecting a video generation request for a first user identifier, a first user scene image set corresponding to the first user identifier in a user multimedia database; a presentation unit 302 configured to present the first set of user scene images; a generation unit 303 configured to generate a first user scene image review video based on each selected first user scene image in response to detecting a selection operation for at least one first user scene image in the first set of user scene images; a play unit 304 configured to play back the first user scene image review video;
in this embodiment, the specific processes of the obtaining unit 301, the presenting unit 302, the generating unit 303 and the playing unit 304 of the multimedia information processing apparatus 300 and the technical effects thereof may refer to the descriptions related to the steps 201, 202, 203 and 204 in the corresponding embodiment of fig. 2, respectively.
Embodiment four:
a schematic diagram of a computer system 400 suitable for use in implementing an electronic device is shown.
As shown in fig. 4, the computer system 400 may include a processing device 401 that may perform various suitable actions and processes in accordance with a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage device 408 into a Random Access Memory (RAM) 403; the RAM 403 also stores various programs and data required for the operation of the computer system 400.
The processing device 401, the ROM 402, and the RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
The following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, magnetic tape, hard disk, etc.; a communication device 409; the communications apparatus 409 may allow the computer system 400 to communicate wirelessly or by wire with other devices to exchange data; while FIG. 4 illustrates a computer system 400 having electronic devices with various means, it should be understood that not all illustrated means are required to be implemented or provided; more or fewer devices may be implemented or provided instead.
Fifth embodiment:
the hardware display facility has the functions of video image display, sound and image data acquisition, networking data processing and feedback, and is placed at the position of a designated area, a user wakes up the hardware display facility through sound, switch keys or a touch screen, the hardware display facility obtains user image information through photographing, then the hardware display facility compares the image information through a networking docking background database, combines the voiceprint comparison of the user, obtains the real identity of the user, and can also obtain the requirement and intention of the user through limb actions in specific occasions.
The background database keeps ten items of basic data for each user for both image information and voiceprint information, and this can be selectively expanded when necessary. The ten items are divided into at least three credential-type items (such as identity documents), three common items (such as work photos), and four daily-updated items (such as recently identified live images). To keep the data real and effective, the daily-updated items are refreshed progressively, that is, the most recently identified data information replaces the older data information; voiceprint data are identified and updated in the same way. During identification, the ten or more items are compared together, the item with the lowest similarity is directly replaced by the new data, and the replaced item is archived in the background database.
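A sketch of this progressive update rule is given below; the data structures and the similarity function are assumptions of the sketch, with only the pool size of ten and the replace-and-archive behaviour taken from the description above.

```python
# Sketch of the progressive update: keep a fixed pool of recognition records per user,
# swap out the least similar record after a new capture, and archive the replaced one.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class RecognitionPool:
    records: list[bytes]                       # e.g. face encodings or voiceprint templates
    archive: list[bytes] = field(default_factory=list)
    max_size: int = 10

    def progressive_update(self, new_record: bytes,
                           similarity: Callable[[bytes, bytes], float]) -> None:
        """Replace the stored record least similar to the new capture;
        the replaced record is kept in the background archive."""
        if len(self.records) < self.max_size:
            self.records.append(new_record)
            return
        scores = [similarity(new_record, r) for r in self.records]
        weakest = scores.index(min(scores))
        self.archive.append(self.records[weakest])
        self.records[weakest] = new_record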
Besides this automatic identification, a user can also manually enter a name to search and query data, and then select what is required from the personal information panel.
When image information is selected for playing, the hardware display facility starts to play the user's videos and image clips, accompanied by music during playback, together with a spoken introduction along the lines of: "xx, welcome back to the place you once visited; next we will play the work photos we made for you during your time here; let us commemorate the working time together and recall those beautiful days." A set of photographs is then scrolled and played continuously for two to three minutes, matched with light music. After playback finishes, voice and on-screen prompts are given at the same time: press button 1 to continue playing, button 2 to download a commemorative copy to a USB drive, and button 3 to exit.
The above embodiments are only preferred embodiments of the present invention, and are not limiting to the technical solutions of the present invention, and any technical solution that can be implemented on the basis of the above embodiments without inventive effort should be considered as falling within the scope of protection of the patent claims of the present invention.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (9)

1. A multimedia information processing method is applied to intelligent display of images, and is characterized in that: the method comprises the steps of responding to a video generation request aiming at a first user identifier, and acquiring a first user scene image set corresponding to the first user identifier from a user multimedia database; presenting the first set of user scene images; in response to detecting a selection operation for at least one first user scene image in the first set of user scene images, generating a first user scene image review video based on each first user scene image selected; and playing the first user scene image review video;
the method further comprises the steps of: in response to detecting an export operation for the first user-scene-image review video and a target export address, sending the first user-scene-image review video to an electronic device corresponding to the target export address;
the first user identification is obtained by the following steps: receiving name keywords input by a first user; inquiring user information matched with the name keywords in a user information database; determining a user identifier corresponding to the queried user information as the first user identifier; or the first user identification is obtained by the following way: acquiring a user photo acquired in real time; matching the user photo with a user image in a user multimedia database; responsive to the presence of a user image matching the user photograph, determining a user identification corresponding to the user image matching the user photograph as the first user identification;
the method further comprises the steps of: and in response to the absence of a user image matching the user photo, presenting first prompt information for indicating that the first user takes the photo on site and then generates a review video again.
2. The method according to claim 1, characterized in that: the method further comprises the steps of: in response to detecting an image synthesis operation of a second user on a second user image and a target scene image, extracting a face image in the second user image, and merging the extracted face image into the target scene image to generate a second user scene image; and presenting the second user scene image.
3. The method according to claim 2, characterized in that: the target scene image is obtained by the following steps: presenting each scene image in a preset scene image set; in response to detecting a selection operation by the second user of a scene image of the presented scene images, determining the selected scene image as the target scene image.
4. A method according to claim 2 or 3, characterized in that: the second user image is obtained by: acquiring a camera real-time acquisition image as a second user image; or acquiring an image uploaded by the second user as a second user image.
5. The method according to claim 2, characterized in that: the method further comprises the steps of: and correspondingly storing the second user scene image and the second user identification into the user multimedia database.
6. The method according to claim 1, characterized in that: the method further comprises the steps of: responding to the detection of the video playing request, and presenting a video playing directory interface; and responding to the detection of the playing request for the target video in the video playing directory interface, and acquiring and playing the target video.
7. A multimedia information processing apparatus that performs the multimedia information processing method of claim 1, characterized in that: the method comprises the following steps: an acquisition unit configured to acquire a first user scene image set corresponding to a first user identifier in a user multimedia database in response to detection of a video generation request for the first user identifier; a presentation unit configured to present the first set of user scene images; a generation unit configured to generate a first user scene image review video based on each selected first user scene image in response to detecting a selection operation for at least one first user scene image in the first set of user scene images; and a playing unit configured to play the first user scene image review video.
8. An electronic device, characterized in that: the method comprises the following steps: one or more processors; storage means having stored thereon one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-6.
9. A computer-readable storage medium having stored thereon a computer program, characterized by: the computer program implementing the method according to any of claims 1-6 when executed by one or more processors.
CN202211440398.1A 2022-11-17 2022-11-17 Multimedia information processing method and device, electronic equipment and storage medium Active CN115714877B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211440398.1A CN115714877B (en) 2022-11-17 2022-11-17 Multimedia information processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211440398.1A CN115714877B (en) 2022-11-17 2022-11-17 Multimedia information processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115714877A (en) 2023-02-24
CN115714877B (en) 2023-06-27

Family

ID=85233856

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211440398.1A Active CN115714877B (en) 2022-11-17 2022-11-17 Multimedia information processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115714877B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7694213B2 (en) * 2004-11-01 2010-04-06 Advanced Telecommunications Research Institute International Video content creating apparatus

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105933538A (en) * 2016-06-15 2016-09-07 维沃移动通信有限公司 Video finding method for mobile terminal and mobile terminal
CN108989703A (en) * 2018-06-28 2018-12-11 Oppo广东移动通信有限公司 Recall video creation method and relevant apparatus
CN113194268A (en) * 2020-01-14 2021-07-30 北京小米移动软件有限公司 Video generation method, device and medium
CN112287141A (en) * 2020-10-29 2021-01-29 维沃移动通信有限公司 Photo album processing method and device, electronic equipment and storage medium
CN113347502A (en) * 2021-06-02 2021-09-03 宁波星巡智能科技有限公司 Video review method, video review device, electronic equipment and medium

Also Published As

Publication number Publication date
CN115714877A (en) 2023-02-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant