CN112004045A - Video processing method, device and storage medium - Google Patents


Info

Publication number
CN112004045A
CN112004045A
Authority
CN
China
Prior art keywords
video data
interface
sub
terminal
face
Prior art date
Legal status
Pending
Application number
CN202010869632.7A
Other languages
Chinese (zh)
Inventor
李江
李刚强
刘博�
钱大友
Current Assignee
Oppo Chongqing Intelligent Technology Co Ltd
Original Assignee
Oppo Chongqing Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Oppo Chongqing Intelligent Technology Co Ltd filed Critical Oppo Chongqing Intelligent Technology Co Ltd
Priority to CN202010869632.7A
Publication of CN112004045A

Classifications

    • H04N 7/141 — Systems for two-way working between two video terminals, e.g. videophone
    • H04N 21/4312 — Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/44008 — Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/472 — End-user interface for requesting content, additional data or services; end-user interface for interacting with content
    • H04N 21/47205 — End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N 21/4788 — Supplemental services communicating with other users, e.g. chatting
    • H04N 5/262 — Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses a video processing method, apparatus, and storage medium. The method includes: acquiring first video data in real time, the first video data being obtained based on a shooting function of a first terminal; acquiring second video data, the second video data being obtained based on a shooting function of a second terminal; and obtaining target video data based on the first video data and the second video data.

Description

Video processing method, device and storage medium
Technical Field
The present invention relates to internet technologies, and in particular, to a video processing method, apparatus, and storage medium.
Background
At present, intelligent terminals on the market, such as mobile phones and tablet computers, are all equipped with cameras to provide shooting and video instant-messaging functions. Shooting is one of the most commonly used functions of an intelligent terminal; users often like to shoot objects of interest with the terminal and use the footage as material for making collections of their favorite content.
With the rapid development of intelligent terminals in recent years, people pay more and more attention to their social functions, and more and more people share and discuss the content collections shot with their terminals. People hope not only to capture and share these collections themselves, but also to participate in their acquisition and editing together with friends and relatives, achieving better communication and emotional connection.
At present, intelligent terminals on the market provide only single-user photographing, video-shooting, and sharing functions, and it is difficult for a user to interact more deeply with relatives and friends through photographing or video shooting.
Disclosure of Invention
In view of the foregoing, it is a primary object of the present invention to provide a video processing method, apparatus and storage medium.
To achieve the above object, the technical solution of the invention is realized as follows:
the embodiment of the invention provides a video processing method, which is applied to a first terminal and comprises the following steps:
acquiring first video data in real time; the first video data is obtained based on a shooting function of a first terminal;
acquiring second video data; the second video data is obtained based on a shooting function of a second terminal;
and obtaining target video data based on the first video data and the second video data.
In the foregoing solution, the acquiring the second video data includes: determining the ID of the second terminal;
and acquiring second video data acquired and sent by the second terminal based on the ID of the second terminal.
In the above scheme, the number of the second video data is at least one; the number of the second terminals is at least one; the method further comprises the following steps:
receiving a first selection instruction; selecting at least one third video data from the at least one second video data according to the first selection instruction;
receiving a second selection instruction; determining the display sequence of each third video data in the at least one third video data according to the second selection instruction;
correspondingly, obtaining the target video data based on the first video data and the second video data includes:
when the first video data is presented, sequentially presenting the at least one third video data according to the display sequence of each third video data;
and obtaining target video data according to the first video data and the at least one third video data presented in sequence.
In the above scheme, the first video data is presented through a first sub-interface; the second video data is presented through a second sub-interface; the number of the second video data is at least two; the number of the second terminals is at least two; the method further comprises the following steps:
receiving a third selection instruction;
selecting at least two second sub-interfaces according to the third selection instruction;
presenting each of the at least two second video data through each of the at least two second sub-interfaces, respectively.
In the above scheme, the first video data is presented through a first sub-interface; the second video data is presented through a second sub-interface; the method further comprises the following steps:
receiving a first adjusting instruction; executing at least one of the following according to the first adjusting instruction:
adjusting the position relation of the first sub-interface and the second sub-interface;
adjusting the interface shape of the first sub-interface;
adjusting the interface shape of the second sub-interface;
correspondingly, the obtaining target video data based on the first video data and the second video data includes:
and generating target video data according to the first video data and the second video data presented in the first sub-interface and the second sub-interface after the position relation and/or the interface shape are/is adjusted.
In the above scheme, the method further comprises:
identifying the first video data to obtain a first identification result;
identifying the second video data to obtain a second identification result;
adding a special effect to the first video data according to the first recognition result; and/or adding a special effect to the second video data according to the second recognition result.
In the above scheme, the first video data is presented through a first sub-interface; the second video data is presented through a second sub-interface; the method further comprises the following steps:
identifying the first video data to obtain a third identification result; the third recognition result includes at least: a first face and a position corresponding to the first face;
identifying the second video data to obtain a fourth identification result; the fourth recognition result includes at least: a second face and a location corresponding to the second face;
adjusting the position relation of the first sub-interface and the second sub-interface according to the position corresponding to the first face and the position corresponding to the second face, so that the first face and the second face meet preset requirements;
wherein the preset requirement comprises at least one of the following:
the first face and the second face are located on the same horizontal line;
the distance between the first face and the second face is lower than a preset distance.
The embodiment of the invention provides a video processing device, which is applied to a first terminal and comprises:
the first acquisition module is used for acquiring first video data in real time; the first video data is obtained based on a shooting function of a first terminal;
the second acquisition module is used for acquiring second video data; the second video data is obtained based on a shooting function of a second terminal;
and the processing module is used for simultaneously presenting the first video data and the second video data to obtain target video data.
In the above scheme, the second obtaining module is configured to determine an ID of the second terminal;
and acquiring second video data acquired and sent by the second terminal based on the ID of the second terminal.
In the above scheme, the number of the second video data is at least one; the number of the second terminals is at least one; the device further comprises: the first processing module is used for receiving a first selection instruction; selecting at least one third video data from the at least one second video data according to the first selection instruction;
receiving a second selection instruction; determining the display sequence of each third video data in the at least one third video data according to the second selection instruction;
correspondingly, the processing module is configured to, when presenting the first video data, present the at least one third video data in sequence according to the display sequence of each third video data;
and obtaining target video data according to the first video data and the at least one third video data presented in sequence.
In the above scheme, the first video data is presented through a first sub-interface; the second video data is presented through a second sub-interface; the number of the second video data is at least two; the number of the second terminals is at least two; the device further comprises: the second processing module is used for receiving a third selection instruction;
selecting at least two second sub-interfaces according to the third selection instruction;
presenting each of the at least two second video data through each of the at least two second sub-interfaces, respectively.
In the above scheme, the first video data is presented through a first sub-interface; the second video data is presented through a second sub-interface; the device further comprises: the third processing module is used for receiving the first adjusting instruction; executing at least one of the following according to the first adjusting instruction:
adjusting the position relation of the first sub-interface and the second sub-interface;
adjusting the interface shape of the first sub-interface;
adjusting the interface shape of the second sub-interface;
correspondingly, the processing module is configured to generate target video data according to the first video data and the second video data presented in the first sub-interface and the second sub-interface after the position relationship and/or the interface shape are/is adjusted.
In the above scheme, the apparatus further comprises: the fourth processing module is used for identifying the first video data to obtain a first identification result;
identifying the second video data to obtain a second identification result;
adding a special effect to the first video data according to the first recognition result; and/or adding a special effect to the second video data according to the second recognition result.
In the above scheme, the first video data is presented through a first sub-interface; the second video data is presented through a second sub-interface; the device further comprises: the fifth processing module is used for identifying the first video data to obtain a third identification result; the third recognition result includes at least: a first face and a position corresponding to the first face;
identifying the second video data to obtain a fourth identification result; the fourth recognition result includes at least: a second face and a location corresponding to the second face;
adjusting the position relation of the first sub-interface and the second sub-interface according to the position corresponding to the first face and the position corresponding to the second face, so that the first face and the second face meet preset requirements;
wherein the preset requirement comprises at least one of the following:
the first face and the second face are located on the same horizontal line;
the distance between the first face and the second face is lower than a preset distance.
An embodiment of the present invention provides a video processing apparatus, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the video processing method according to any one of the above items when executing the computer program.
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any one of the above video processing methods.
The embodiment of the invention provides a video processing method, apparatus, and storage medium. The method includes: acquiring first video data in real time, the first video data being obtained based on a shooting function of a first terminal; acquiring second video data, the second video data being obtained based on a shooting function of a second terminal; and obtaining target video data based on the first video data and the second video data. In this way, video data shot by the user is combined with video data shot by other device holders, realizing interactive two-person or multi-person video shooting and editing and making shooting more interesting.
Drawings
Fig. 1 is a schematic flowchart of a video processing method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of another video processing method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of rights acquisition according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a multi-scene video processing interface according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating another multi-scene video processing interface according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of another video processing apparatus according to an embodiment of the present invention.
Detailed Description
Prior to further detailed description of the present invention with reference to the examples, the related art will be described.
As described above, in the related art a user can only use a single photographing, camera-shooting, or sharing function and cannot enter deeper interactive communication with friends and relatives through photographing or shooting video. The related art therefore cannot meet users' expectations for how terminals should develop, nor their desire to participate jointly in content acquisition and editing. A more intelligent video processing method, closer to the user, is thus desirable to meet the requirement for interaction during shooting.
Based on this, in the embodiment of the present invention, the first video data is obtained in real time; the first video data is obtained based on a shooting function of a first terminal; acquiring second video data; the second video data is obtained based on a shooting function of a second terminal; and obtaining target video data based on the first video data and the second video data.
The present invention will be described in further detail with reference to examples.
Fig. 1 is a schematic flowchart of a video processing method according to an embodiment of the present invention; as shown in fig. 1, the method includes:
step 101, acquiring first video data in real time; the first video data is obtained based on a shooting function of a first terminal;
102, acquiring second video data; the second video data is obtained based on a shooting function of a second terminal;
and 103, obtaining target video data based on the first video data and the second video data.
The video processing method can be applied to a first terminal, wherein the first terminal has a shooting function, namely the first terminal is a terminal with or connected with an image acquisition module, and the image acquisition module can be a camera. For example: the first terminal can be a smart phone, a tablet computer, a notebook computer and the like.
The video processing method can be carried out during shooting; that is, acquiring the first video data in real time refers to obtaining the first video data as it is captured, and acquiring the second video data refers to obtaining the second video data captured by the second terminal in real time.
In some embodiments, the obtaining second video data comprises:
determining an Identity (ID) of the second terminal;
and acquiring second video data acquired and sent by the second terminal based on the ID of the second terminal.
Here, the method may be executed by an application (APP) in the first terminal. For example, a video-editing APP installed in the first terminal may obtain the usage right of the image acquisition module and thereby acquire the first video data in real time through that module.
The second terminal can establish a connection relation with the first terminal, so that the first terminal can acquire video data acquired by the second terminal.
Specifically, the first terminal acquires the first video data in real time, namely the first terminal acquires the first video data in real time through an image acquisition module which is arranged on or connected with the first terminal;
the first terminal acquiring the second video data means that, after the first terminal establishes a connection with the second terminal, the second terminal captures the second video data through an image acquisition module provided on or connected to it, and the first terminal acquires that second video data from the second terminal in real time.
The acquiring of the first video data by the first terminal in real time and the acquiring of the second video data by the first terminal may be performed simultaneously.
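The simultaneous acquisition of the two streams can be sketched as follows. This is a purely illustrative Python sketch, not the claimed implementation: the two frame generators and the `acquire` helper are hypothetical stand-ins for the terminals' image acquisition modules and network transport.

```python
from itertools import count, islice

def local_frames():
    """Stand-in for frames captured by the first terminal's own camera."""
    for i in count():
        yield f"local-frame-{i}"

def remote_frames(terminal_id):
    """Stand-in for frames streamed from a second terminal, located by its ID."""
    for i in count():
        yield f"{terminal_id}-frame-{i}"

def acquire(first, second, n):
    """Pull the two streams in lockstep, pairing frames captured at the same time."""
    return list(islice(zip(first, second), n))

pairs = acquire(local_frames(), remote_frames("terminal-B"), 3)
```

Each pair combines one locally captured frame with one remote frame, mirroring the simultaneous acquisition described above.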
In practical application, the first terminal side may receive second video data transmitted by a plurality of second terminals; therefore, the presentation order of the second video data transmitted by the plurality of second terminals can be set.
Based on this, in an embodiment, the number of the second video data is at least one; the number of the second terminals is at least one; the method further comprises the following steps:
receiving a first selection instruction; selecting at least one third video data from the at least one second video data according to the first selection instruction;
receiving a second selection instruction; determining the display sequence of each third video data in the at least one third video data according to the second selection instruction;
correspondingly, generating target video data based on the first video data and the second video data comprises:
when the first video data is presented, sequentially presenting the at least one third video data according to the display sequence of each third video data;
and obtaining target video data according to the first video data and the at least one third video data presented in sequence.
By the method, the video data to be presented can be selected from the second video data corresponding to the plurality of second terminals and recorded as the third video data; here, the number of the third video data may be one or more;
in the case of a plurality of third video data, the display order may be further adjusted, for example: selecting three pieces of third video data, and respectively recording the three pieces of third video data as third video data A, third video data B and third video data C; determining that the presentation order is third video data A, third video data C and third video data B based on the second selection instruction;
and when the first video data is presented, sequentially presenting third video data A, third video data C and third video data B according to a display sequence.
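The two-stage selection above (first selection instruction picks the third video data, second selection instruction fixes their display order) can be sketched as below; the `select_and_order` helper and the index-based encoding of the instructions are illustrative assumptions, not part of the disclosure.

```python
def select_and_order(second_streams, first_selection, second_selection):
    """first_selection: indices of the second video data to keep (the third video data);
    second_selection: display order, a permutation of the kept streams."""
    third = [second_streams[i] for i in first_selection]
    return [third[i] for i in second_selection]

streams = ["A", "B", "C", "D"]
# Keep A, B, C, then present them in the order A, C, B as in the example above.
ordered = select_and_order(streams, [0, 1, 2], [0, 2, 1])
```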
Here, the first terminal has a human-computer interaction interface, and a user performs corresponding operations through the human-computer interaction interface, so that the first terminal receives corresponding instructions (a first selection instruction and a second selection instruction);
the first terminal is further provided with a display interface, the display interface can be divided into a first sub-interface and a second sub-interface when multi-terminal interaction is performed, the first video data is displayed through the first sub-interface, and the at least one third video data is sequentially presented through the second sub-interface (for example, the third video data A, the third video data C and the third video data B are sequentially presented according to the display sequence).
In this way, the contents shot by different users are recorded and switched according to rules, and a satisfactory multi-user video collection (for example, a collection in which different users perform the same action) is obtained as soon as shooting ends. This avoids the trouble of later content editing and simplifies the shooting workflow.
In practical application, the first terminal side can acquire the second video data sent by the plurality of second terminals, so that the second video data sent by the plurality of second terminals can be presented at the same time.
Based on this, in one embodiment, the first video data is presented through a first sub-interface; the second video data is presented through a second sub-interface;
the number of the second video data is at least two; the number of the second terminals is at least two;
the method further comprises the following steps:
receiving a third selection instruction;
selecting at least two second sub-interfaces according to the third selection instruction;
presenting each of the at least two second video data through each of the at least two second sub-interfaces, respectively.
Here, a user holding the first terminal may select a plurality of sub-interfaces through the human-computer interaction interface of the first terminal, the first terminal receives a third selection instruction, determines at least two second sub-interfaces, and each of the at least two second sub-interfaces respectively presents one of the at least two second video data.
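The binding of the at least two second video streams to the selected second sub-interfaces can be sketched as follows; the pane names and the `assign_streams` helper are hypothetical, introduced only for illustration.

```python
def assign_streams(selected_panes, second_streams):
    """Pair each selected second sub-interface with exactly one second video stream."""
    if len(selected_panes) != len(second_streams):
        raise ValueError("need one sub-interface per second video stream")
    return dict(zip(selected_panes, second_streams))

# Two second terminals, so two second sub-interfaces are selected.
layout = assign_streams(["pane-1", "pane-2"], ["stream-B", "stream-C"])
```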
In actual application, to improve interest and user experience, the shape and/or position of each sub-interface can be adjusted.
Based on this, in one embodiment, the first video data is presented through a first sub-interface; the second video data is presented through a second sub-interface; the first sub-interface and the second sub-interface belong to a display interface of the first terminal;
the method further comprises the following steps:
receiving a first adjusting instruction; executing at least one of the following according to the first adjusting instruction:
adjusting the position relation of the first sub-interface and the second sub-interface;
adjusting the interface shape of the first sub-interface;
adjusting the interface shape of the second sub-interface;
correspondingly, the obtaining target video data based on the first video data and the second video data includes:
and generating target video data according to the first video data and the second video data which are simultaneously presented in the first sub-interface and the second sub-interface after the position relation and/or the interface shape are/is adjusted.
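The adjustment and composition step can be sketched as below. The dictionary-based pane model and the `adjust_pane` / `compose_frame` helpers are illustrative assumptions; a real terminal would composite pixel buffers rather than metadata.

```python
def adjust_pane(pane, position=None, shape=None):
    """Apply a first adjusting instruction: move and/or reshape a sub-interface."""
    if position is not None:
        pane["position"] = position
    if shape is not None:
        pane["shape"] = shape
    return pane

def compose_frame(first_pane, second_pane):
    """A target frame records both panes' contents with their adjusted layout."""
    return {"layers": [dict(first_pane), dict(second_pane)]}

first = {"stream": "local", "position": (0, 0), "shape": "rect"}
second = {"stream": "remote", "position": (100, 0), "shape": "rect"}
adjust_pane(second, position=(40, 60), shape="circle")  # move and reshape pane 2
frame = compose_frame(first, second)
```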
In practical application, in order to improve the interest of the video, a special effect may be added to the corresponding video data (e.g., the first video data and the second video data).
Based on this, in an embodiment, the method further comprises:
identifying the first video data to obtain a first identification result;
identifying the second video data to obtain a second identification result;
adding a special effect to the first video data according to the first recognition result; and/or adding a special effect to the second video data according to the second recognition result.
Here, the corresponding video data (e.g., the first video data, the second video data) may be identified to obtain a corresponding identification result (e.g., a first identification result corresponding to the first video data, a second identification result corresponding to the second video data); and inquiring a preset special effect library according to the corresponding recognition result, determining a special effect corresponding to the corresponding recognition result, and adding the determined special effect to the corresponding video data.
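The look-up in a preset special-effect library can be sketched as follows; the library entries and recognition labels are invented purely for illustration.

```python
# Preset special-effect library: recognition result -> effect (illustrative entries).
EFFECT_LIBRARY = {
    "smile": "confetti",
    "wave": "sparkle",
    "night_scene": "brighten",
}

def add_effect(video_meta, recognition_result):
    """Query the library with the recognition result and attach the matched effect."""
    effect = EFFECT_LIBRARY.get(recognition_result)
    if effect is not None:
        video_meta.setdefault("effects", []).append(effect)
    return video_meta

clip = add_effect({"stream": "first"}, "smile")
```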
In practical application, in order to better blend the second video data into the first video data, the first video data and the second video data can be identified and blended by adjusting the position.
Based on this, in one embodiment, the first video data is presented through a first sub-interface; the second video data is presented through a second sub-interface; the first sub-interface and the second sub-interface belong to a display interface of a first terminal;
the method further comprises the following steps:
identifying the first video data to obtain a third identification result; the third recognition result includes at least: a first face and a position corresponding to the first face;
identifying the second video data to obtain a fourth identification result; the fourth recognition result includes at least: a second face and a location corresponding to the second face;
adjusting the position relation of the first sub-interface and the second sub-interface according to the position corresponding to the first face and the position corresponding to the second face, so that the first face and the second face meet preset requirements;
wherein the preset requirement comprises at least one of the following:
the first face and the second face are located on the same horizontal line;
the distance between the first face and the second face is lower than a preset distance.
Specifically, the method provided by the embodiment of the invention can be applied to a group photo. For example, at a family or classmate gathering, one person (denoted as user D) may be absent, and the gathering scene serves as the main scene, namely the first video data; the second sub-interface (in which the second video data presenting user D's image is shown) can then be adjusted so that it blends into the main scene. At this moment, the presented first video data and second video data can be screen-recorded or photographed to obtain a group photo that includes user D.
The number of the first faces may be one or more; the number of the second faces may be one or more;
by recognizing the first video data and the second video data, the person included in the first video data and the person included in the second video data can be determined, and the face positions can be adjusted — for example, so that the first face and the second face lie on the same horizontal line, and/or the distance between the first face and the second face is below the preset distance — allowing the second face to blend better into the merged scene.
In the above-described processing, the first face and the second face are kept horizontally aligned, with the difference in their vertical positions not exceeding a predetermined threshold.
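The adjustment above amounts to computing an offset for the second sub-interface so that the two face centres end up on one horizontal line and within the preset distance. A sketch under stated assumptions — (x, y) pixel coordinates for the face centres, and an illustrative 150-pixel preset distance that is not taken from the patent:

```python
def alignment_offset(first_face_pos, second_face_pos, max_distance=150):
    """Return (dx, dy) by which to move the second sub-interface so the
    second face meets the preset requirements relative to the first face.

    Positions are (x, y) face centres in display coordinates.
    """
    # Vertical offset puts both faces on the same horizontal line
    dy = first_face_pos[1] - second_face_pos[1]
    # Horizontal offset: move only as far as needed to bring the faces
    # within the preset distance of each other
    dx = first_face_pos[0] - second_face_pos[0]
    if abs(dx) > max_distance:
        dx -= max_distance if dx > 0 else -max_distance
    else:
        dx = 0  # already within the preset distance
    return dx, dy
```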
In an embodiment, the target video data is obtained based on the first video data and the second video data, which may be understood as obtaining the target video data by taking pictures, recording videos, and the like while obtaining the first video data and the second video data.
Correspondingly, an embodiment of the present invention further provides a video processing method applied to a second terminal, where the video processing method of the second terminal includes:
establishing a connection relation with the first terminal;
and collecting second video data and sending the collected second video data to the first terminal.
The connection with the first terminal can be established through the video editing APP installed on each of the first terminal and the second terminal, with a server relaying data between the two terminals; the second video data can then be sent to the first terminal and invoked by the first terminal.
The video editing APP installed or provided on the first terminal and the second terminal can be configured to be usable only after logging in to an account. In this way, after a user logs in to the video editing APP with the user's own account, the server can acquire the corresponding video data and store it in association with that account; other video data may also be obtained as needed based on the account.
For example, for the first terminal, the server acquires first video data and sends the first video data to the second terminal; for a second terminal, the server acquires second video data and sends the second video data to the first terminal;
the two video data can acquire the video data of the two video data and the video data of the other party, and the target video data is obtained based on the two video data; double interaction is realized;
of course, when the number of second terminals is more than one, multi-person interaction is realized.
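The relay flow above can be sketched with an in-memory stand-in for the server: each account uploads its latest video data, and a peer terminal fetches it by account. The class, method names, and account strings are assumptions for illustration, not the patent's actual API:

```python
class RelayServer:
    """Minimal in-memory stand-in for the video editing APP's server."""

    def __init__(self):
        self._store = {}  # account -> latest uploaded video data

    def upload(self, account, video_data):
        """Store video data in association with the uploading account."""
        self._store[account] = video_data

    def fetch(self, account):
        """Return the video data stored for an account, or None if absent."""
        return self._store.get(account)

server = RelayServer()
server.upload("first_terminal", b"frame-1")   # first terminal uploads
server.upload("second_terminal", b"frame-2")  # second terminal uploads
# Each side then fetches the other party's data to build the target video data
```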
Fig. 2 is a schematic flow chart of another video processing method according to an embodiment of the present invention; as shown in fig. 2, the video processing method is applied to a first terminal having a camera module and a video editing APP; the method comprises the following steps:
step 201, invoking the camera module permission;
specifically, the camera is invoked by the video editing APP installed on or shipped with the first terminal; a camera-permission request may pop up during invocation, and once the user grants the request, the camera can be used.
Fig. 3 is a schematic diagram of rights acquisition according to an embodiment of the present invention; as shown in fig. 3, the user taps the camera, and after receiving the corresponding request the first terminal displays a permission-acquisition prompt; the user then taps Allow or Deny, the first terminal receives the result of the request, and determines based on that result whether the camera can be used.
Step 202, acquiring scene picture video information through a camera module, and uploading the acquired scene picture video information to a video editing APP background;
specifically, after the video editing APP obtains the camera module permission, the camera module collects scene picture video information (equivalent to the first video data, that is, images collected by the camera module) and uploads the collected scene picture video information to the video editing APP background;
step 203, the video editing APP background searches for resources and allocates the found resources;
the video editing APP can be an APP needing to be logged in, that is, a user needs to log in the video editing APP based on an account number held by the user; therefore, the scene picture video information which is instantly shot by calling the camera module can be uploaded to a network, and is correspondingly stored with the user account bound with the video editing APP for resource allocation and use of the background of the video editing APP.
For the video editing APP on the first terminal, scene picture video information captured and uploaded by other terminals can be searched for (specifically, based on the accounts of the other terminals), and multi-terminal shooting interaction is realized by combining it with the scene picture video information captured by the first terminal.
Step 204, the video editing APP responds to the scene switching request and performs resource allocation;
in the application process, the user can switch scenes, specifically by performing the corresponding operation through the human-computer interaction interface, upon which the video editing APP responds to the scene switching request;
the scene switching request comprises at least one of the following:
a first selection instruction for selecting one or more from a plurality of scenes;
a second selection instruction for selecting a presentation order of a plurality of scenes;
and a third selection instruction, configured to select at least two display sub-interfaces (corresponding to the second sub-interface), where each display sub-interface may be used to present a scene.
The above scene can be understood as scene picture video information shot by other terminals.
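The three selection-instruction types listed above can be sketched as a single dispatcher. The request's dict shape and the scene names below are illustrative assumptions, not the patent's data format:

```python
def handle_scene_switch(request, scenes):
    """Dispatch the three selection-instruction types for scene switching.

    `request` carries an 'instruction' field plus per-type parameters;
    `scenes` is the list of available scene identifiers.
    """
    kind = request["instruction"]
    if kind == "first":    # select one or more scenes
        return [s for s in scenes if s in request["selected"]]
    if kind == "second":   # select the presentation order of the scenes
        return sorted(scenes, key=request["order"].index)
    if kind == "third":    # assign scenes to chosen display sub-interfaces
        return dict(zip(request["sub_interfaces"], scenes))
    raise ValueError(f"unknown instruction: {kind}")
```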
Step 205, switching the video editing APP to a corresponding scene;
here, the video editing APP provides an interface for the user to make a scene selection; it responds to the scene switching request and switches between different scenes.
For example, in multi-person interaction, a multi-user scene may be selected for a multi-scene video collection (e.g., in response to the first selection instruction, the second selection instruction, and the third selection instruction); of course, in two-person or multi-person interaction, each party may also select a single-user scene.
Step 206, acquiring a target image or a target video.
And generating a target picture or a target video based on the scene picture video information of single-person interaction, double-person interaction or multi-person interaction.
According to the method provided by the embodiment of the invention, this function is added to the video editing APP: after the camera permission is invoked, the content captured by the camera is, subject to the user's authorization through the video editing APP, uploaded to the server and stored in real time in association with the user account. When a user wants shooting interaction, the other party's live camera scene can be scheduled through the account (where the other user's permission allows), and interactions such as photographing and video recording can then be carried out. In this way, shooting content in which the whole group participates is obtained and edited, the shooting operation is simple, and the goals of communication and emotional bonding are better achieved. Meanwhile, the selection of different scenes makes shooting more fun and enhances user stickiness.
The method provided by the embodiment of the invention is not only suitable for double-person interactive shooting, but also suitable for group interactive shooting. For example, in the instant scene of a certain user, contents shot by different users are edited into a set, so that the interestingness of the shooting mode is improved.
According to the method provided by the embodiment of the invention, the content shot by different users is recorded, user scenes can be switched according to a time rule, and a satisfactory multi-user video collection (such as a collection of different users performing the same action) can be obtained as soon as shooting finishes, avoiding the trouble of later content editing; this improves the convenience of the shooting mode.
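The time-rule switching described here can be sketched as a simple rotation over the chosen scene order; the 5-second interval is an illustrative assumption, not a value from the patent:

```python
def scene_at(elapsed_seconds, scene_order, interval=5.0):
    """Return the scene active at a given elapsed time: every `interval`
    seconds the next scene in the chosen display order becomes active,
    wrapping around so recording can continue indefinitely."""
    index = int(elapsed_seconds // interval) % len(scene_order)
    return scene_order[index]
```

Sampling this function while recording would yield the multi-user video collection in which each user's scene appears in turn.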
Fig. 4 is a schematic diagram of a multi-scene video processing interface according to an embodiment of the present invention; as shown in fig. 4, the display interface of the first terminal has a first sub-interface and a second sub-interface, and video data corresponding to scenes 1 to 9 are displayed in turn in the second sub-interface. Video interaction with nine other terminals can thus be realized. Moreover, as mentioned above, a satisfactory multi-user video collection can be obtained as soon as shooting finishes, avoiding the trouble of later content editing; this improves the convenience of the shooting mode.
FIG. 5 is a diagram illustrating another multi-scene video processing interface according to an embodiment of the present invention; as shown in fig. 5, the first terminal has two second sub-interfaces within the display interface; and video data corresponding to the scene 1 and the scene 2 are respectively displayed in the two second sub-interfaces. Thus, video interaction with a plurality of terminals can be realized.
Fig. 6 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present invention; as shown in fig. 6, the video processing apparatus is applied to a first terminal, and the apparatus includes:
the first acquisition module is used for acquiring first video data in real time; the first video data is obtained based on a shooting function of a first terminal;
the second acquisition module is used for acquiring second video data; the second video data is obtained based on a shooting function of a second terminal;
and the processing module is used for simultaneously presenting the first video data and the second video data to obtain target video data.
Specifically, the second obtaining module is configured to determine an ID of the second terminal;
and acquiring second video data acquired and sent by the second terminal based on the ID of the second terminal.
Specifically, the number of the second video data is at least one; the number of the second terminals is at least one; the device further comprises: the first processing module is used for receiving a first selection instruction; selecting at least one third video data from the at least one second video data according to the first selection instruction;
receiving a second selection instruction; determining the display sequence of each third video data in the at least one third video data according to the second selection instruction;
correspondingly, the processing module is configured to, when presenting the first video data, present the at least one third video data in sequence according to the display sequence of each third video data;
and obtaining target video data according to the first video data and the at least one third video data presented in sequence.
Specifically, the first video data is presented through a first sub-interface; the second video data is presented through a second sub-interface; the number of the second video data is at least two; the number of the second terminals is at least two; the device further comprises: the second processing module is used for receiving a third selection instruction;
selecting at least two second sub-interfaces according to the third selection instruction;
presenting each of the at least two second video data through each of the at least two second sub-interfaces, respectively.
Specifically, the first video data is presented through a first sub-interface; the second video data is presented through a second sub-interface; the device further comprises: the third processing module is used for receiving the first adjusting instruction; executing at least one of the following according to the first adjusting instruction:
adjusting the position relation of the first sub-interface and the second sub-interface;
adjusting the interface shape of the first sub-interface;
adjusting the interface shape of the second sub-interface;
correspondingly, the processing module is configured to generate target video data according to the first video data and the second video data presented in the first sub-interface and the second sub-interface after the position relationship and/or the interface shape are/is adjusted.
Specifically, the apparatus further comprises: the fourth processing module is used for identifying the first video data to obtain a first identification result;
identifying the second video data to obtain a second identification result;
adding a special effect to the first video data according to the first recognition result; and/or adding a special effect to the second video data according to the second recognition result.
Specifically, the first video data is presented through a first sub-interface; the second video data is presented through a second sub-interface; the device further comprises: the fifth processing module is used for identifying the first video data to obtain a third identification result; the third recognition result includes at least: a first face and a position corresponding to the first face;
identifying the second video data to obtain a fourth identification result; the fourth recognition result includes at least: a second face and a location corresponding to the second face;
adjusting the position relation of the first sub-interface and the second sub-interface according to the position corresponding to the first face and the position corresponding to the second face, so that the first face and the second face meet preset requirements;
wherein the preset requirement comprises at least one of the following:
the first face and the second face are located on the same horizontal line;
the distance between the first face and the second face is lower than a preset distance.
It should be noted that: in the above-described embodiment, when the video processing apparatus implements the corresponding video processing method, only the division of the program modules is taken as an example, and in practical applications, the processing distribution may be completed by different program modules according to needs, that is, the internal structure of the first terminal is divided into different program modules to complete all or part of the processing described above. In addition, the apparatus provided by the above embodiment and the embodiment of the corresponding method belong to the same concept, and the specific implementation process thereof is described in the method embodiment, which is not described herein again.
Fig. 7 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present invention; as shown in fig. 7, the apparatus 70 includes: a processor 701 and a memory 702 for storing a computer program operable on the processor; the processor 701 is configured to, when running the computer program, perform: acquiring first video data in real time; the first video data is obtained based on a shooting function of a first terminal; acquiring second video data; the second video data is obtained based on a shooting function of a second terminal; and obtaining target video data based on the first video data and the second video data.
In an embodiment, the processor 701 is further configured to, when running the computer program, perform: determining the ID of the second terminal; and acquiring second video data acquired and sent by the second terminal based on the ID of the second terminal.
In an embodiment, the processor 701 is further configured to, when running the computer program, perform: receiving a first selection instruction; selecting at least one third video data from the at least one second video data according to the first selection instruction;
receiving a second selection instruction; determining the display sequence of each third video data in the at least one third video data according to the second selection instruction;
correspondingly, the following steps are also executed: when the first video data is presented, sequentially presenting the at least one third video data according to the display sequence of each third video data;
and obtaining target video data according to the first video data and the at least one third video data presented in sequence.
In an embodiment, the processor 701 is further configured to, when running the computer program, perform: receiving a third selection instruction; selecting at least two second sub-interfaces according to the third selection instruction; presenting each of the at least two second video data through each of the at least two second sub-interfaces, respectively.
In an embodiment, the processor 701 is further configured to, when running the computer program, perform: receiving a first adjusting instruction; executing at least one of the following according to the first adjusting instruction:
adjusting the position relation of the first sub-interface and the second sub-interface;
adjusting the interface shape of the first sub-interface;
adjusting the interface shape of the second sub-interface;
correspondingly, the following steps are also executed:
and generating target video data according to the first video data and the second video data presented in the first sub-interface and the second sub-interface after the position relation and/or the interface shape are/is adjusted.
In an embodiment, the processor 701 is further configured to, when running the computer program, perform: identifying the first video data to obtain a first identification result;
identifying the second video data to obtain a second identification result;
adding a special effect to the first video data according to the first recognition result; and/or adding a special effect to the second video data according to the second recognition result.
In an embodiment, the processor 701 is further configured to, when running the computer program, perform: identifying the first video data to obtain a third identification result; the third recognition result includes at least: a first face and a position corresponding to the first face;
identifying the second video data to obtain a fourth identification result; the fourth recognition result includes at least: a second face and a location corresponding to the second face;
adjusting the position relation of the first sub-interface and the second sub-interface according to the position corresponding to the first face and the position corresponding to the second face, so that the first face and the second face meet preset requirements;
wherein the preset requirement comprises at least one of the following:
the first face and the second face are located on the same horizontal line;
the distance between the first face and the second face is lower than a preset distance.
When the processor runs the computer program, the corresponding process implemented by the first terminal in the methods according to the embodiments of the present invention is implemented, and for brevity, no further description is given here.
In practical applications, the apparatus 70 may further include: at least one network interface 703. The various components in the video processing device 70 are coupled together by a bus system 704. It is understood that the bus system 704 is used to enable communications among the components. The bus system 704 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled in fig. 7 as the bus system 704. The number of the processors 701 may be at least one. The network interface 703 is used for communication between the video processing apparatus 70 and other devices in a wired or wireless manner.
The memory 702 in embodiments of the present invention is used to store various types of data to support the operation of the video processing device 70.
The method disclosed in the above embodiments of the present invention may be applied to the processor 701, or implemented by the processor 701. The processor 701 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 701 or by instructions in the form of software. The processor 701 may be a general-purpose processor, a Digital Signal Processor (DSP), or another programmable logic device, discrete gate or transistor logic device, discrete hardware component, or the like. The processor 701 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor or any conventional processor, or the like. The steps of the method disclosed in the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software modules may be located in a storage medium located in the memory 702; the processor 701 reads the information in the memory 702 and completes the steps of the aforementioned methods in combination with its hardware.
In an exemplary embodiment, the video processing device 70 may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, microcontrollers (MCUs), microprocessors, or other electronic components, for performing the foregoing methods.
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored; the computer program, when executed by a processor, performs: acquiring first video data in real time; the first video data is obtained based on a shooting function of a first terminal; acquiring second video data; the second video data is obtained based on a shooting function of a second terminal; and obtaining target video data based on the first video data and the second video data.
In one embodiment, the computer program, when executed by the processor, performs: determining the ID of the second terminal; and acquiring second video data acquired and sent by the second terminal based on the ID of the second terminal.
In one embodiment, the computer program, when executed by the processor, performs: receiving a first selection instruction; selecting at least one third video data from the at least one second video data according to the first selection instruction;
receiving a second selection instruction; determining the display sequence of each third video data in the at least one third video data according to the second selection instruction;
correspondingly, the following steps are also executed: when the first video data is presented, sequentially presenting the at least one third video data according to the display sequence of each third video data;
and obtaining target video data according to the first video data and the at least one third video data presented in sequence.
In one embodiment, the computer program, when executed by the processor, performs: receiving a third selection instruction; selecting at least two second sub-interfaces according to the third selection instruction; presenting each of the at least two second video data through each of the at least two second sub-interfaces, respectively.
In one embodiment, the computer program, when executed by the processor, performs: receiving a first adjusting instruction; executing at least one of the following according to the first adjusting instruction:
adjusting the position relation of the first sub-interface and the second sub-interface;
adjusting the interface shape of the first sub-interface;
adjusting the interface shape of the second sub-interface;
correspondingly, the following steps are also executed:
and generating target video data according to the first video data and the second video data presented in the first sub-interface and the second sub-interface after the position relation and/or the interface shape are/is adjusted.
In one embodiment, the computer program, when executed by the processor, performs: identifying the first video data to obtain a first identification result;
identifying the second video data to obtain a second identification result;
adding a special effect to the first video data according to the first recognition result; and/or adding a special effect to the second video data according to the second recognition result.
In one embodiment, the computer program, when executed by the processor, performs: identifying the first video data to obtain a third identification result; the third recognition result includes at least: a first face and a position corresponding to the first face;
identifying the second video data to obtain a fourth identification result; the fourth recognition result includes at least: a second face and a location corresponding to the second face;
adjusting the position relation of the first sub-interface and the second sub-interface according to the position corresponding to the first face and the position corresponding to the second face, so that the first face and the second face meet preset requirements;
wherein the preset requirement comprises at least one of the following:
the first face and the second face are located on the same horizontal line;
the distance between the first face and the second face is lower than a preset distance.
When the computer program is executed by the processor, the corresponding process implemented by the first terminal in the methods according to the embodiments of the present invention is implemented, and for brevity, no further description is given here.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A video processing method applied to a first terminal, the method comprising:
acquiring first video data in real time; the first video data is obtained based on a shooting function of a first terminal;
acquiring second video data; the second video data is obtained based on a shooting function of a second terminal;
and obtaining target video data based on the first video data and the second video data.
2. The method of claim 1, wherein the obtaining second video data comprises: determining an identity ID of the second terminal;
and acquiring second video data acquired and sent by the second terminal based on the ID of the second terminal.
3. The method of claim 1, wherein the number of items of the second video data is at least one, and the number of second terminals is at least one; the method further comprising:
receiving a first selection instruction, and selecting at least one item of third video data from the at least one item of second video data according to the first selection instruction;
receiving a second selection instruction, and determining a display order of each item of third video data in the at least one item of third video data according to the second selection instruction;
correspondingly, the obtaining target video data based on the first video data and the second video data comprises:
while presenting the first video data, presenting the at least one item of third video data in sequence according to the display order of each item of third video data; and
obtaining the target video data from the first video data and the at least one item of third video data presented in sequence.
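The selection and ordering steps of claim 3 can be sketched as follows (hypothetical stream identifiers; the two selection instructions are modeled as plain lists of IDs):

```python
def order_selected_streams(second_streams, selected_ids, display_order):
    """Pick the streams named by the first selection instruction, then
    arrange them according to the second selection instruction."""
    # First selection instruction: choose the "third video data" subset.
    third = {sid: second_streams[sid] for sid in selected_ids}
    # Second selection instruction: fix the display order of that subset.
    return [third[sid] for sid in display_order]

streams = {"A": "video-A", "B": "video-B", "C": "video-C"}
ordered = order_selected_streams(streams, ["A", "C"], ["C", "A"])
```

Here `ordered` holds the selected streams in the order in which they would be presented alongside the first video data.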
4. The method of claim 1, wherein the first video data is presented through a first sub-interface, the second video data is presented through a second sub-interface, the number of items of the second video data is at least two, and the number of second terminals is at least two; the method further comprising:
receiving a third selection instruction;
selecting at least two second sub-interfaces according to the third selection instruction; and
presenting each of the at least two items of second video data through a respective one of the at least two second sub-interfaces.
5. The method of claim 1, wherein the first video data is presented through a first sub-interface and the second video data is presented through a second sub-interface; the method further comprising:
receiving a first adjustment instruction, and performing at least one of the following according to the first adjustment instruction:
adjusting the positional relationship of the first sub-interface and the second sub-interface;
adjusting the interface shape of the first sub-interface; and
adjusting the interface shape of the second sub-interface;
correspondingly, the obtaining target video data based on the first video data and the second video data comprises:
generating the target video data from the first video data and the second video data presented in the first sub-interface and the second sub-interface after the positional relationship and/or the interface shape is adjusted.
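The sub-interface adjustments of claim 5 can be modeled as edits to two on-screen rectangles. A sketch, with a hypothetical instruction format (a dict of optional fields; the claim does not specify how the instruction is encoded):

```python
from dataclasses import dataclass

@dataclass
class SubInterface:
    x: int  # top-left position on screen
    y: int
    w: int  # interface shape: width
    h: int  # interface shape: height

def apply_adjustment(first, second, instruction):
    """Apply a first adjustment instruction: swap the positions of the
    two sub-interfaces and/or reshape either of them."""
    if instruction.get("swap_positions"):
        (first.x, first.y), (second.x, second.y) = \
            (second.x, second.y), (first.x, first.y)
    if "first_shape" in instruction:
        first.w, first.h = instruction["first_shape"]
    if "second_shape" in instruction:
        second.w, second.h = instruction["second_shape"]
    return first, second

a = SubInterface(0, 0, 640, 360)
b = SubInterface(640, 0, 320, 180)
apply_adjustment(a, b, {"swap_positions": True, "second_shape": (160, 90)})
```

After the adjustment, the target video would be rendered from the two streams laid out in the updated rectangles.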
6. The method of claim 1, further comprising:
recognizing the first video data to obtain a first recognition result;
recognizing the second video data to obtain a second recognition result; and
adding a special effect to the first video data according to the first recognition result, and/or adding a special effect to the second video data according to the second recognition result.
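Claim 6 maps a per-stream recognition result to a special effect. A toy sketch of that mapping (the result labels and effect names are hypothetical; the claim names neither):

```python
def choose_effect(recognition_result):
    """Map a recognition result to the special effect to overlay on the
    stream; return None when no effect applies."""
    effects = {"smiling_face": "sparkles", "waving_hand": "confetti"}
    return effects.get(recognition_result)

first_effect = choose_effect("smiling_face")   # effect for the first stream
second_effect = choose_effect("unknown_pose")  # no effect is added
```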
7. The method of claim 1, wherein the first video data is presented through a first sub-interface and the second video data is presented through a second sub-interface; the method further comprising:
recognizing the first video data to obtain a third recognition result, the third recognition result comprising at least a first face and a position corresponding to the first face;
recognizing the second video data to obtain a fourth recognition result, the fourth recognition result comprising at least a second face and a position corresponding to the second face; and
adjusting the positional relationship of the first sub-interface and the second sub-interface according to the position corresponding to the first face and the position corresponding to the second face, so that the first face and the second face meet a preset requirement;
wherein the preset requirement comprises at least one of the following:
the first face and the second face are located on the same horizontal line; and
the distance between the first face and the second face is less than a preset distance.
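The same-horizontal-line requirement of claim 7 reduces to shifting one sub-interface vertically by the difference between the two on-screen face rows. A sketch under that assumption, with all positions given as pixel rows (the variable names are illustrative, not from the patent):

```python
def align_second_interface(first_iface_y, first_face_y,
                           second_iface_y, second_face_y):
    """Return the new vertical offset of the second sub-interface so
    that both detected faces land on the same on-screen row."""
    # On-screen face row = sub-interface offset + face row inside the frame.
    first_screen_row = first_iface_y + first_face_y
    second_screen_row = second_iface_y + second_face_y
    # Shift the second sub-interface by the row difference.
    return second_iface_y + (first_screen_row - second_screen_row)

# First face sits at row 0 + 120 = 120; second at 300 + 80 = 380.
new_y = align_second_interface(0, 120, 300, 80)
```

With `new_y`, the second face's on-screen row becomes `new_y + 80`, matching the first face's row of 120.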
8. A video processing apparatus, applied to a first terminal, the apparatus comprising:
a first acquisition module configured to acquire first video data in real time, the first video data being obtained based on a shooting function of the first terminal;
a second acquisition module configured to acquire second video data, the second video data being obtained based on a shooting function of a second terminal; and
a processing module configured to obtain target video data based on the first video data and the second video data.
9. A video processing apparatus comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202010869632.7A 2020-08-26 2020-08-26 Video processing method, device and storage medium Pending CN112004045A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010869632.7A CN112004045A (en) 2020-08-26 2020-08-26 Video processing method, device and storage medium

Publications (1)

Publication Number Publication Date
CN112004045A true CN112004045A (en) 2020-11-27

Family

ID=73470914

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010869632.7A Pending CN112004045A (en) 2020-08-26 2020-08-26 Video processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN112004045A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680480A (en) * 2013-11-28 2015-06-03 腾讯科技(上海)有限公司 Image processing method and device
CN108900790A (en) * 2018-06-26 2018-11-27 努比亚技术有限公司 Method of video image processing, mobile terminal and computer readable storage medium
CN108989692A (en) * 2018-10-19 2018-12-11 北京微播视界科技有限公司 Video capture method, apparatus, electronic equipment and computer readable storage medium
CN109089059A (en) * 2018-10-19 2018-12-25 北京微播视界科技有限公司 Method, apparatus, electronic equipment and the computer storage medium that video generates
CN109862412A (en) * 2019-03-14 2019-06-07 广州酷狗计算机科技有限公司 Method, apparatus and storage medium for keeping in step with a video
CN110336968A (en) * 2019-07-17 2019-10-15 广州酷狗计算机科技有限公司 Video recording method, device, terminal device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201127