CN106101824B - Information processing method, electronic equipment and server - Google Patents


Info

Publication number
CN106101824B
CN106101824B
Authority
CN
China
Prior art keywords
video stream
images
user
frames
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610513223.7A
Other languages
Chinese (zh)
Other versions
CN106101824A (en)
Inventor
张阳 (Zhang Yang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201610513223.7A
Publication of CN106101824A
Application granted
Publication of CN106101824B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/441Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • H04N21/4415Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention discloses an information processing method, which comprises the following steps: acquiring a trigger instruction; starting a first camera and a second camera based on the trigger instruction; obtaining a first video stream through a first camera and obtaining a second video stream through a second camera; transmitting the first video stream and the second video stream to a server in real time, wherein the first video stream and the second video stream are used as processing objects of the server; and obtaining a processing result aiming at the first video stream and the second video stream, wherein the processing result is a processing result sent by the server. The information processing method disclosed by the invention can enrich the functions of the electronic equipment and the server.

Description

Information processing method, electronic equipment and server
Technical Field
The invention belongs to the technical field of communication, and particularly relates to an information processing method, electronic equipment and a server.
Background
At present, a user can take a photo with an electronic device and upload it to a server, which then performs subsequent processing based on the uploaded photo, such as editing the photo or authenticating the user's identity from it. However, the functions of the electronic device and the server remain relatively limited, and how to further enrich them is a problem for those skilled in the art to consider.
Disclosure of Invention
In view of the above, the present invention provides an information processing method applied to an electronic device and a server to enrich functions of the electronic device and the server.
To achieve the above object, the present invention provides the following technical solutions:
in a first aspect, the present invention provides an information processing method, including:
the method comprises the steps of obtaining a trigger instruction, wherein the trigger instruction is used for indicating to start a first camera and a second camera of the electronic equipment, and the first camera and the second camera are arranged on different side surfaces of the electronic equipment; starting the first camera and the second camera based on the trigger instruction; obtaining a first video stream through the first camera and a second video stream through the second camera; transmitting the first video stream and the second video stream to a server in real time, wherein the first video stream and the second video stream are used as processing objects of the server; and obtaining a processing result aiming at the first video stream and the second video stream, wherein the processing result is a processing result sent by the server.
Preferably, in the information processing method, the obtaining a first video stream by the first camera and obtaining a second video stream by the second camera includes: and acquiring images in real time by using the first camera to form the first video stream, and acquiring images in real time by using the second camera to form the second video stream, wherein the first video stream comprises images of users, and the second video stream comprises images of scenes where the users are located.
Preferably, in the information processing method, the processing result is a processing result generated by the server processing based on M frames of the first image and N frames of the second image; the M frames of first images are M frames of first images including a user extracted from the first video stream, the N frames of second images are N frames of second images correspondingly extracted from the second video stream based on the M frames of first images including the user, and M and N are integers greater than or equal to 1.
Preferably, in the information processing method, the obtaining of the trigger instruction includes: when the electronic device calls an identity authentication module, acquiring a trigger instruction generated by the identity authentication module.
In a second aspect, the present invention provides an information processing method, including:
receiving a first video stream and a second video stream sent by an electronic device, wherein the first video stream is generated by a first camera of the electronic device, the second video stream is generated by a second camera of the electronic device, and the first camera and the second camera of the electronic device are arranged on different sides; processing the first video stream and the second video stream to generate a processing result; and sending the processing result to the electronic device.
Preferably, in the information processing method, a first camera and a second camera of the electronic device perform image acquisition simultaneously to form the first video stream and the second video stream, where the first video stream includes an image of a user, and the second video stream includes an image of a scene where the user is located;
the processing the first video stream and the second video stream to generate a processing result includes: extracting M frames of a first image including a user in the first video stream; correspondingly extracting N frames of second images in the second video stream based on the M frames of first images of the user; and processing the M frames of first images and the N frames of second images to generate processing results, wherein M and N are integers greater than or equal to 1.
Preferably, in the information processing method, the processing based on the M frames of first images and the N frames of second images includes:
obtaining biometric information of the user based on the M frames of first images;
judging whether the biological characteristic information meets a preset condition or not;
and processing the N frames of second images to generate a processing result when the biological characteristic information meets the preset condition.
Preferably, in the information processing method, the processing the N frames of second images to generate a processing result when the biometric information satisfies the predetermined condition includes:
when the biological characteristic information meets a preset condition for representing the attention of a user, obtaining object information of an object contained in the N frames of second images through image analysis;
obtaining data content associated with the object information as a processing result.
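As a rough illustration of this attention-driven branch, the sketch below pairs a hypothetical set of recognized objects with a made-up content catalog; neither the recognizer output nor the catalog entries are specified by the disclosure.

```python
# Hypothetical sketch: when the biometric information indicates the user
# is paying attention, look up data content associated with the objects
# recognized in the N frames of second images. The catalog entries and
# object labels are illustrative placeholders, not part of the disclosure.

CONTENT_CATALOG = {
    "painting": "artist and exhibition details",
    "landmark": "opening hours and history",
}

def attention_result(is_attentive, recognized_objects):
    """Return data content associated with the recognized objects,
    but only when the user's attention condition is satisfied."""
    if not is_attentive:
        return None  # predetermined attention condition not met
    return {obj: CONTENT_CATALOG[obj]
            for obj in recognized_objects if obj in CONTENT_CATALOG}
```

A real system would replace the dictionary lookup with image analysis of the second images and a query against an associated content service.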
Preferably, in the information processing method, the processing the N frames of second images to generate a processing result when the biometric information satisfies the predetermined condition includes:
when the biological characteristic information meets a preset condition for representing the identity of a user, obtaining environmental information in the N frames of second images through image analysis;
determining whether the environment information belongs to any of a plurality of secure environments corresponding to the user identity;
when the environment information belongs to one of the plurality of secure environments corresponding to the user identity, determining that the identity verification based on the biometric information is successful, and taking the verification success result as the processing result;
and when the environment information does not belong to any of the plurality of secure environments corresponding to the user identity, determining that the identity verification based on the biometric information fails, and taking the verification failure result as the processing result.
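This identity-verification branch can be pictured with a minimal sketch; the user identifier, environment labels, and registry of secure environments below are hypothetical stand-ins for the environment information obtained through image analysis.

```python
# Hypothetical sketch of the identity branch: verification succeeds only
# when the biometrics matched AND the scene captured behind the user is
# one of the secure environments registered for that identity. The
# registry contents are illustrative, not part of the disclosure.

SECURE_ENVIRONMENTS = {
    "alice": {"home_office", "company_lobby"},
}

def verify_identity(user_id, biometric_ok, environment):
    """Combine the biometric check with the environment check and
    return 'success' or 'failure' as the processing result."""
    if not biometric_ok:
        return "failure"  # biometric condition not satisfied
    if environment in SECURE_ENVIRONMENTS.get(user_id, set()):
        return "success"  # environment is among the secure environments
    return "failure"      # unfamiliar environment: verification fails
```

The point of the design is that a stolen biometric alone is insufficient: the second camera's view of the surroundings acts as a second factor.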
In a third aspect, the present invention provides an electronic device, including a first camera, a second camera, a communication module, a processor, and a memory;
the communication module is used for receiving and transmitting data;
the memory is used for storing data required by the processor to run and data generated by the processor in the running process;
the first camera and the second camera are arranged on different side surfaces of the electronic equipment;
the processor obtains a trigger instruction for instructing to start the first camera and the second camera, based on the trigger instruction, the first camera and the second camera are started, a first video stream is obtained through the first camera and a second video stream is obtained through the second camera, the communication module is controlled to transmit the first video stream and the second video stream to a server in real time, and processing results of the server for the first video stream and the second video stream are obtained through the communication module.
Preferably, in the electronic device, the processor is specifically configured to, in terms of obtaining a first video stream by the first camera and obtaining a second video stream by the second camera:
and acquiring images in real time by using the first camera to form the first video stream, and acquiring images in real time by using the second camera to form the second video stream, wherein the first video stream comprises images of users, and the second video stream comprises images of scenes where the users are located.
Preferably, in the electronic device, the trigger instruction obtained by the processor is generated when an authentication module in the electronic device is called.
In a fourth aspect, the present invention provides a server comprising a communication module, a processor, and a memory;
the communication module is used for receiving and transmitting data;
the memory is used for storing data required by the processor to run and data generated by the processor in the running process;
the processor receives a first video stream and a second video stream sent by the electronic equipment through the communication module, processes the first video stream and the second video stream to generate a processing result, and sends the processing result to the electronic equipment through the communication module;
the first video stream is generated by a first camera of the electronic device, the second video stream is generated by a second camera of the electronic device, and the first camera and the second camera of the electronic device are arranged on different sides.
Preferably, a first camera and a second camera of the electronic device perform image acquisition simultaneously to form the first video stream and the second video stream, where the first video stream includes an image of a user, and the second video stream includes an image of a scene where the user is located;
in the server, in an aspect that the processor processes the first video stream and the second video stream to generate a processing result, the processor is specifically configured to: the processor extracts M frames of first images including a user from the first video stream, correspondingly extracts N frames of second images from the second video stream based on the M frames of first images of the user, and processes the M frames of first images and the N frames of second images to generate processing results, wherein M and N are integers greater than or equal to 1.
Preferably, in the server, in terms of performing processing based on the M frames of first images and the N frames of second images, the processor is specifically configured to: obtain the biometric information of the user based on the M frames of first images, judge whether the biometric information satisfies a predetermined condition, and process the N frames of second images to generate a processing result when the biometric information satisfies the predetermined condition.
Preferably, in the server, in terms of processing the N frames of second images, the processor is specifically configured to: when the biometric information satisfies a predetermined condition for representing the attention of the user, obtain object information of an object contained in the N frames of second images through image analysis, and obtain data content associated with the object information as a processing result.
Preferably, in the server, in terms of processing the N frames of second images, the processor is specifically configured to: when the biometric information satisfies a predetermined condition for representing the identity of a user, obtain environment information in the N frames of second images through image analysis; determine whether the environment information belongs to any of a plurality of secure environments corresponding to the user identity; when it does, determine that the identity verification based on the biometric information is successful and take the verification success result as the processing result; and when it does not, determine that the identity verification based on the biometric information fails and take the verification failure result as the processing result.
Therefore, the beneficial effects of the invention are as follows:
according to the information processing method disclosed by the invention, the first camera and the second camera of the electronic equipment are started after the trigger instruction is obtained, and the first video stream generated by the first camera and the second video stream generated by the second camera are transmitted to the server in real time, so that the server can process the first video stream and the second video stream to obtain the processing result and receive the processing result sent by the server. Based on the information processing method disclosed by the invention, the electronic equipment respectively generates the video streams by utilizing the two cameras and transmits the two video streams to the server in real time, so that the server can process based on more data, more accurate and rich processing results are generated, and the functions of the electronic equipment and the server are enriched.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a flowchart of an information processing method disclosed in a first embodiment of the present invention;
FIG. 2 is a flowchart of an information processing method according to a second embodiment of the present invention;
FIG. 3 is a flowchart of an information processing method according to a third embodiment of the present invention;
FIG. 4 is a flowchart of an information processing method according to a fourth embodiment of the present invention;
FIG. 5 is a flowchart of an information processing method according to a fifth embodiment of the present invention;
FIG. 6 is a flowchart of an information processing method according to a sixth embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an electronic device disclosed in a seventh embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a server disclosed in an eighth embodiment of the present invention.
Detailed Description
The invention discloses an information processing method applied to electronic equipment and a server, which is used for enriching the functions of the electronic equipment and the server.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Referring to fig. 1, fig. 1 is a flowchart of an information processing method according to an embodiment of the present invention. The information processing method is applied to the electronic equipment, the electronic equipment comprises a first camera and a second camera, and the first camera and the second camera are located on different sides. The information processing method includes:
step S11: a trigger instruction is obtained.
The trigger instruction is used for indicating to start a first camera and a second camera of the electronic equipment.
The triggering instruction can be input by a user and can also be generated by the following modes: when the electronic equipment calls the identity authentication module, the identity authentication module generates a trigger instruction. It should be noted that the identity authentication module may be called by a system of the electronic device, or may be called by an application in the electronic device, such as a payment-type application, and an unlock application.
Step S12: and starting the first camera and the second camera based on the trigger instruction.
Step S13: a first video stream is obtained by a first camera and a second video stream is obtained by a second camera.
Step S14: and transmitting the first video stream and the second video stream to a server in real time.
The first video stream and the second video stream are used as processing objects of the server.
Step S15: and obtaining a processing result aiming at the first video stream and the second video stream, wherein the processing result is a processing result sent by the server.
According to the information processing method disclosed by the embodiment of the invention, after the trigger instruction is obtained, the first camera and the second camera of the electronic equipment are started, and the first video stream generated by the first camera and the second video stream generated by the second camera are transmitted to the server in real time, so that the server can process the first video stream and the second video stream to obtain the processing result and receive the processing result sent by the server. Based on the information processing method disclosed by the embodiment of the invention, the electronic equipment respectively generates the video streams by utilizing the two cameras and transmits the two video streams to the server in real time, so that the server can process based on more data, more accurate and rich processing results are generated, and the functions of the electronic equipment and the server are enriched.
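The flow of steps S11 to S15 can be sketched in miniature as follows. The Camera and Server classes are illustrative stand-ins invented for this sketch, not interfaces defined in the disclosure; a real device would drive camera hardware and a network transport instead.

```python
# Minimal, hypothetical sketch of the client-side flow in steps S11-S15.
# Camera and Server are illustrative stand-ins, not part of the patent.

class Camera:
    """Stand-in for a device camera producing a stream of frames."""
    def __init__(self, frames):
        self._frames = frames
        self.started = False

    def start(self):
        self.started = True

    def stream(self):
        return list(self._frames) if self.started else []


class Server:
    """Stand-in for the server that processes both video streams."""
    def process(self, first_stream, second_stream):
        # Placeholder processing: report how much data was received.
        return {"frames_received": len(first_stream) + len(second_stream)}


def run_method(trigger, first_camera, second_camera, server):
    """Steps S11-S15: start both cameras on a trigger, transmit both
    streams to the server, and return the server's processing result."""
    if not trigger:                        # S11: obtain the trigger instruction
        return None
    first_camera.start()                   # S12: start both cameras
    second_camera.start()
    first_stream = first_camera.stream()   # S13: obtain both video streams
    second_stream = second_camera.stream()
    # S14/S15: transmit both streams and obtain the processing result
    return server.process(first_stream, second_stream)
```

In the disclosed method the two streams are uploaded continuously in real time rather than collected and sent in one batch; the sketch collapses that detail for brevity.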
Example two
Referring to fig. 2, fig. 2 is a flowchart of an information processing method disclosed in the second embodiment of the present invention. The information processing method is applied to the electronic equipment, the electronic equipment comprises a first camera and a second camera, and the first camera and the second camera are located on different sides. The information processing method includes:
step S21: a trigger instruction is obtained.
The trigger instruction is used for indicating to start a first camera and a second camera of the electronic equipment.
Step S22: and starting the first camera and the second camera based on the trigger instruction.
Step S23: and simultaneously, acquiring images in real time by using a second camera to form a second video stream comprising images of a scene where the user is located.
In the process of image acquisition by a first camera and a second camera in the electronic equipment, the first camera is opposite to a user and can acquire images of the user, and a first video stream formed by the first camera comprises the images of the user. Because the first camera and the second camera are located on different sides of the electronic device, in the process of acquiring the user image by the first camera, the second camera can acquire the image of the scene where the user is located, and the second video stream formed by the second camera comprises the image of the scene where the user is located.
Step S24: and transmitting the first video stream and the second video stream to a server in real time.
The first video stream and the second video stream are used as processing objects of the server.
Step S25: and obtaining a processing result aiming at the first video stream and the second video stream, wherein the processing result is a processing result sent by the server.
The electronic equipment transmits a first video stream and a second video stream to the server in real time, wherein the first video stream comprises images of a user, and the second video stream comprises images of a scene where the user is located. The server can perform more diverse operations with the first video stream and the second video stream, such as: a video having a 3D effect is generated using the first video stream and the second video stream, in which video the user is located at a central location.
In the information processing method disclosed by the second embodiment of the invention, the first camera and the second camera of the electronic device are started after the trigger instruction is obtained, the first camera generates a first video stream including the user image, the second camera generates a second video stream including the scene image of the user, and the first video stream and the second video stream are transmitted to the server in real time, so that the server processes the first video stream and the second video stream to obtain a processing result and receives the processing result sent by the server. Based on the information processing method disclosed by the second embodiment of the invention, the first video stream generated by the first camera comprises the image of the user, and the second video stream generated by the second camera comprises the image of the scene where the user is located, so that the server can process and generate a processing result based on the image of the user in the first video stream and the image of the scene where the user is located in the second video stream, and the functions of the electronic equipment and the server are enriched.
In the second embodiment, as a preferable mode, the electronic device obtains the processing result as follows: the server processes M frames of first images and N frames of second images to generate the processing result. The M frames of first images are frames including the user extracted from the first video stream, the N frames of second images are correspondingly extracted from the second video stream based on the M frames of first images, and M and N are integers greater than or equal to 1.
EXAMPLE III
Referring to fig. 3, fig. 3 is a flowchart of an information processing method disclosed in the third embodiment of the present invention. The information processing method is applied to the server. The information processing method includes:
step S31: and receiving a first video stream and a second video stream transmitted by the electronic equipment.
The first video stream is generated by a first camera of the electronic device, the second video stream is generated by a second camera of the electronic device, and the first camera and the second camera are arranged on different sides.
Step S32: and processing the first video stream and the second video stream to generate a processing result.
Step S33: the results are processed to the electronic device.
In the information processing method disclosed in the third embodiment of the present invention, after receiving the first video stream and the second video stream sent by the electronic device, the server processes the first video stream and the second video stream, and sends the processing result to the electronic device. Because the processing objects of the server are the first video stream and the second video stream and contain more data, the server can generate more accurate and rich processing results, thereby enriching the functions of the electronic equipment and the server.
Example four
Referring to fig. 4, fig. 4 is a flowchart of an information processing method according to a fourth embodiment of the present invention. The information processing method is applied to the server. The information processing method includes:
step S41: and receiving a first video stream and a second video stream transmitted by the electronic equipment.
The first video stream is formed by image acquisition performed by a first camera of the electronic device, and the second video stream is formed by image acquisition performed simultaneously by a second camera of the electronic device; the first video stream comprises images of a user, and the second video stream comprises images of the scene where the user is located.
In the process of image acquisition by a first camera and a second camera in the electronic equipment, the first camera is opposite to a user and can acquire images of the user, and a first video stream formed by the first camera comprises the images of the user. Because the first camera and the second camera are located on different sides of the electronic device, in the process of acquiring the user image by the first camera, the second camera can acquire the image of the scene where the user is located, and the second video stream formed by the second camera comprises the image of the scene where the user is located.
Step S42: m frames of a first image including a user are extracted in a first video stream.
Step S43: and correspondingly extracting N frames of second images in the second video stream based on the M frames of first images of the user.
Step S44: and processing based on the M frames of first images and the N frames of second images to generate a processing result.
The above-mentioned steps S42 to S44 are an embodiment of processing the first video stream and the second video stream. Wherein M and N are integers greater than or equal to 1.
In implementation, the following methods may be adopted to extract the M frames of first images including the user from the first video stream: extracting the images at specific times from the first video stream, or extracting images from the first video stream at predetermined time intervals.
In implementation, the following methods may also be adopted to extract the M frames of first images including the user from the first video stream: analyzing the first video stream and extracting the corresponding images when it is determined that the user shows a specific expression or action; or analyzing the first video stream and extracting the corresponding images when it is determined that the user speaks a specific word or that a parameter of the sound made by the user reaches a preset value.
Correspondingly extracting N frames of second images from the second video stream based on the M frames of first images of the user is specifically: extracting the corresponding N frames of second images from the second video stream according to the acquisition times of the M frames of first images, where the acquisition times of the N frames of second images correspond one-to-one to the acquisition times of the M frames of first images. Since the first video stream and the second video stream are captured simultaneously, for each acquisition moment in the first video stream the second video stream contains an image captured at the same moment.
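One way to picture this one-to-one extraction is a small sketch in which frames are modeled as (timestamp, image) pairs; the interval-based sampling strategy and the pair layout are assumptions made for illustration only.

```python
# Hypothetical sketch of frame extraction and pairing. Frames are
# modeled as (timestamp, image) tuples; a real stream would carry
# encoded video with per-frame capture times.

def sample_first_images(first_stream, interval):
    """Extract M first images at a predetermined time interval
    (one of the extraction strategies described above)."""
    return [(ts, img) for ts, img in first_stream if ts % interval == 0]

def extract_matching_frames(selected_first, second_stream):
    """Return the N second-stream frames whose capture times match the
    capture times of the M selected first-stream frames one-to-one."""
    by_time = {ts: img for ts, img in second_stream}
    return [(ts, by_time[ts]) for ts, _ in selected_first if ts in by_time]
```

Because both cameras capture simultaneously, the timestamp lookup always finds a second-stream frame for each selected first-stream frame in the ideal case; the `if ts in by_time` guard merely tolerates dropped frames.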
Step S45: send the processing result to the electronic device.
In the information processing method disclosed in the fourth embodiment of the present invention, after receiving the first video stream and the second video stream sent by the electronic device, the server extracts M frames of first images of the user from the first video stream, correspondingly extracts N frames of second images from the second video stream based on those first images, and then processes the M frames of first images and the N frames of second images to obtain a processing result.
In the fourth embodiment, the processing based on the M frames of first images and the N frames of second images may be performed as follows:
obtaining biometric information of the user based on the M frames of first images; determining whether the biometric information satisfies a predetermined condition; and, when the biometric information satisfies the predetermined condition, processing the N frames of second images to generate a processing result.
The biometric information of the user includes, but is not limited to: facial feature information, fingerprint feature information, iris feature information, expression information, limb information, and voice information of the user.
In practice, there are various ways to determine whether the biometric information satisfies the predetermined condition and, correspondingly, various ways to process the N frames of second images when it does. These are described below with reference to Embodiment Five and Embodiment Six.
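The branch structure just described, in which the second images are processed only after the biometric condition is met, can be sketched as a skeleton. The three callables are placeholders (assumptions) standing in for the concrete analyses of the later embodiments; real implementations would run image and audio analysis.

```python
# Skeleton of the server-side processing: biometric information is
# derived from the M first images, and the N second images are
# processed only when that information satisfies the predetermined
# condition. The callables are illustrative stand-ins.

def process(m_first, n_second, get_biometrics, condition, handle_second):
    info = get_biometrics(m_first)
    if condition(info):
        return handle_second(n_second)
    return None  # condition not met: no processing result generated

# Toy stand-ins: "biometric information" is the loudest voice sample.
result = process(
    m_first=[{"volume": 40}, {"volume": 85}],
    n_second=["scene-a", "scene-b"],
    get_biometrics=lambda frames: max(f["volume"] for f in frames),
    condition=lambda volume: volume >= 80,  # hypothetical preset value
    handle_second=lambda frames: f"analyzed {len(frames)} scene frames",
)
print(result)
```

The same skeleton serves both later branches: the condition and handler differ (attention analysis vs. identity verification), while the control flow stays identical.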
Embodiment Five
Referring to fig. 5, fig. 5 is a flowchart of an information processing method disclosed in the fifth embodiment of the present invention. The information processing method is applied to the server. The information processing method includes:
Step S51: receive the first video stream and the second video stream transmitted by the electronic device.
The first video stream is formed by a first camera of the electronic device collecting images in real time, and the second video stream is formed by a second camera of the electronic device collecting images simultaneously; the first video stream contains images of the user, and the second video stream contains images of the scene where the user is located.
Step S52: extract M frames of first images including the user from the first video stream.
Step S53: correspondingly extract N frames of second images from the second video stream based on the M frames of first images of the user.
Step S54: obtain biometric information of the user based on the M frames of first images.
Step S55: determine whether the biometric information satisfies a predetermined condition characterizing the user's degree of attention.
When the biometric information of the user satisfies a predetermined condition characterizing the user's degree of attention, this indicates that the N frames of second images contain the user's point of attention, for example an image of a building, a car of a particular brand, a person, or a work that the user is focusing on.
The biometric information of the user includes facial expression information, limb information, and voice information of the user. The predetermined conditions characterizing the user's attention include, but are not limited to: 1. the user's voice information contains a specific word, for example a word directing the attention of others, such as "quick look", or an interjection such as "oh"; 2. the volume or pitch of the user's voice reaches a predetermined value; 3. the user's limb information contains specific limb information, for example pointing a finger in a certain direction, shaking the arms, cheering and jumping, or gazing at the same area for a long time; 4. the user's facial expression information contains specific facial expression information.
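The attention conditions enumerated above can be sketched as a simple predicate. The keyword list, volume threshold, and gesture names are illustrative assumptions chosen for the example, not values fixed by the disclosure.

```python
# Sketch of step S55: test whether extracted biometric information
# satisfies any predetermined condition characterizing attention.
# All constants below are hypothetical example values.

ATTENTION_KEYWORDS = {"quick look", "oh"}          # specific words
VOLUME_THRESHOLD = 80                              # predetermined value
ATTENTION_GESTURES = {"pointing", "arm-shake", "jump"}

def indicates_attention(info):
    """info: dict of biometric fields parsed from the first images/audio."""
    if any(word in info.get("speech", "") for word in ATTENTION_KEYWORDS):
        return True                                # condition 1: keyword
    if info.get("volume", 0) >= VOLUME_THRESHOLD:
        return True                                # condition 2: volume
    if info.get("gesture") in ATTENTION_GESTURES:
        return True                                # condition 3: gesture
    return False

print(indicates_attention({"speech": "quick look at that car"}))  # True
print(indicates_attention({"volume": 35, "gesture": "none"}))     # False
```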
Step S56: when the biological feature information satisfies a predetermined condition for characterizing the user attention, object information of an object contained in the N frames of second images is obtained through image analysis.
The object information of the object contained in the second image can be an image of an object in the second image, such as an image of a building, an image of a person, or an image of L ogo (logo).
Step S57: data content associated with the object information is obtained as a result of the processing.
The server searches by using the obtained object information to obtain the data content related to the object information. The data content may be a text introduction to the object or image information associated with the object.
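Steps S56 and S57 can be sketched as a recognize-then-lookup pipeline. The recognizer and the content table below are stand-ins (assumptions); an actual server would run image analysis on the second images and query a search service.

```python
# Sketch of steps S56-S57: identify the object contained in the
# second images, then look up data content associated with it.
# DATA_CONTENT is a hypothetical search index; recognize_object is a
# placeholder for real image analysis.

DATA_CONTENT = {
    "car": "A mid-size sedan; text introduction retrieved by search.",
    "building": "A 20-storey office tower; text introduction.",
}

def recognize_object(second_images):
    """Placeholder: assume image analysis yields an object label."""
    return second_images[0]["label"]

def lookup_data_content(second_images):
    label = recognize_object(second_images)
    return DATA_CONTENT.get(label, "no associated content found")

result = lookup_data_content([{"label": "car"}])
print(result)
```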
Step S58: send the processing result to the electronic device.
This is illustrated with an example. A first camera and a second camera are provided on two opposite sides of the electronic device. The user starts both cameras to shoot, with the first camera facing the user. In this case, the first camera collects images in real time to generate a first video stream containing images of the user, while the second camera collects images in real time to generate a second video stream containing the scene where the user is located. The electronic device transmits the first video stream and the second video stream to the server in real time.
The server extracts first images from the first video stream at a preset time interval and correspondingly extracts second images from the second video stream. The server analyzes the first images to obtain the user's biometric information. When that information satisfies a predetermined condition characterizing the user's attention (for example, the user gazes at a certain area for a long time), the server analyzes the extracted second images, obtains the image of a vehicle contained in them, searches based on that image, obtains a text introduction to the vehicle or related images of vehicles of that model as the processing result, and transmits the processing result to the electronic device.
After receiving the processing result transmitted by the server, the electronic device displays it through its display device.
If the electronic device is a mobile phone, the phone displays the received processing result on its display screen.
If the electronic device is an AR device (augmented reality device), the AR device adjusts the display effect of a specific area of its display screen after receiving the processing result, and displays the processing result in that area. The user can see the real vehicle through the display screen of the AR device while seeing the processing result displayed in the specific area.
If the electronic device is a VR device (virtual reality device), the VR device, after receiving the processing result transmitted by the server, displays it simultaneously with the current virtual image on its display screen. Note that the current virtual image of the VR device may be a virtual image formed by virtually processing the images shot by the second camera.
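The three display cases above can be sketched as a simple dispatch on device type. The device-type names and rendered strings are illustrative assumptions; actual devices would drive their own rendering pipelines.

```python
# Sketch of how the electronic device might route the server's
# processing result to its display: phone shows it on screen, AR
# overlays it on the real scene, VR composes it with the current
# virtual image. All strings here are illustrative stand-ins.

def display(device_type, result, virtual_image=None):
    if device_type == "phone":
        return f"screen: {result}"
    if device_type == "ar":
        # real scene stays visible; result shown in a specific area
        return f"overlay-region: {result}"
    if device_type == "vr":
        # result shown alongside the current virtual image
        return f"{virtual_image} + {result}"
    raise ValueError(f"unknown device type: {device_type}")

print(display("phone", "car intro"))
print(display("ar", "car intro"))
print(display("vr", "car intro", virtual_image="virtualized scene"))
```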
In the information processing method disclosed in the fifth embodiment of the present invention, after receiving the first video stream and the second video stream sent by the electronic device, the server extracts M frames of first images of the user from the first video stream and correspondingly extracts N frames of second images of the scene where the user is located from the second video stream. It then determines the biometric information of the user based on the M frames of first images and, when that information satisfies a predetermined condition characterizing the user's attention, obtains object information of an object contained in the N frames of second images, obtains data content associated with that object information as the processing result, and sends the processing result to the electronic device. In this way, the server determines the object the user is focusing on in the second video stream from the user images in the first video stream, obtains the data content associated with that object, and sends it to the electronic device.
Embodiment Six
Referring to fig. 6, fig. 6 is a flowchart of an information processing method disclosed in the sixth embodiment of the present invention. The information processing method is applied to the server. The information processing method includes:
Step S61: receive the first video stream and the second video stream transmitted by the electronic device.
The first video stream is formed by a first camera of the electronic device collecting images in real time, and the second video stream is formed by a second camera of the electronic device collecting images simultaneously; the first video stream contains images of the user, and the second video stream contains images of the scene where the user is located.
Step S62: extract M frames of first images including the user from the first video stream.
Step S63: correspondingly extract N frames of second images from the second video stream based on the M frames of first images of the user.
Step S64: obtain biometric information of the user based on the M frames of first images.
Step S65: determine whether the biometric information satisfies a predetermined condition characterizing the identity of the user.
The biometric information of the user includes facial feature information, fingerprint feature information, and iris feature information of the user. The predetermined conditions characterizing the identity of the user include, but are not limited to: the user's facial feature information matching pre-stored facial feature information; the user's fingerprint feature information matching pre-stored fingerprint feature information; and the user's iris feature information matching pre-stored iris feature information.
Step S66: when the biometric information satisfies the predetermined condition characterizing the identity of the user, obtain, through image analysis, the environment information in the N frames of second images.
That is, when the biometric information of the user satisfies the predetermined condition characterizing the user's identity, image analysis is performed on the N frames of second images to determine the environment information they contain. The environment information may be the objects contained in the scene where the user is located, or those objects together with their positions.
Step S67: determine whether the environment information belongs to the plurality of secure environments corresponding to the user identity.
Step S68: when the environment information belongs to the plurality of secure environments corresponding to the user identity, determine that the identity verification based on the biometric information succeeds, and take the verification success as the processing result.
Step S69: when the environment information does not belong to the plurality of secure environments corresponding to the user identity, determine that the identity verification based on the biometric information fails, and take the verification failure as the processing result.
The plurality of secure environments are preset by the user and uploaded to the server. After obtaining the environment information in the N frames of second images, the server determines whether it belongs to the plurality of secure environments corresponding to the user identity: if it does, the user identity verification is determined to have succeeded; if it does not, the verification is determined to have failed.
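The double verification of this embodiment, in which both the biometric match (step S65) and the secure-environment check (step S67) must pass, can be sketched as follows. The stored template and the environment list are illustrative assumptions standing in for the user's pre-stored biometrics and preset secure environments.

```python
# Sketch of steps S65-S69: authentication succeeds only when the
# biometric feature matches the pre-stored template AND the scene's
# environment belongs to the user's preset secure environments.
# STORED_FACE and SECURE_ENVIRONMENTS are hypothetical values.

STORED_FACE = "face-template-alice"
SECURE_ENVIRONMENTS = {"home-office", "company-lobby"}  # preset by user

def authenticate(face_feature, environment):
    if face_feature != STORED_FACE:
        return "verification failed"      # identity not confirmed
    if environment not in SECURE_ENVIRONMENTS:
        return "verification failed"      # environment not secure
    return "verification succeeded"

print(authenticate("face-template-alice", "home-office"))    # succeeds
print(authenticate("face-template-alice", "internet-cafe"))  # fails
print(authenticate("face-template-mallory", "home-office"))  # fails
```

Requiring both factors means a stolen biometric sample alone is insufficient, which is the security benefit the embodiment claims.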
Step S610: send the processing result to the electronic device.
In the information processing method disclosed in the sixth embodiment of the present invention, after receiving the first video stream and the second video stream sent by the electronic device, the server extracts M frames of first images of the user from the first video stream and correspondingly extracts N frames of second images of the scene where the user is located from the second video stream. It then determines the biometric information of the user based on the M frames of first images and, when that information satisfies a predetermined condition characterizing the user's identity, obtains the environment information in the N frames of second images. Only when the obtained environment information belongs to the plurality of secure environments corresponding to the user identity is the identity verification determined to have succeeded. By performing double verification against both the user's biometric information and the scene where the user is located, the method can improve the security of user identity verification.
Embodiment Seven
The present invention also discloses an electronic device, the structure of which is shown in fig. 7. The electronic device comprises a first camera 101, a second camera 102, a communication module 103, a processor 104 and a memory 105.
The communication module 103 is used for transceiving data.
The memory 105 is used for storing data required by the processor 104 during operation and data generated by the processor 104 during operation.
The first camera 101 and the second camera 102 are disposed on different sides of the electronic device.
The processor 104 obtains a trigger instruction for instructing to start the first camera 101 and the second camera 102, starts the first camera 101 and the second camera 102 based on the trigger instruction, obtains a first video stream through the first camera 101 while obtaining a second video stream through the second camera 102, controls the communication module 103 to transmit the first video stream and the second video stream to the server in real time, and obtains, through the communication module 103, the processing result of the server for the first video stream and the second video stream.
The trigger instruction may be input by the user, or may be generated as follows: when the electronic device calls an identity authentication module, the identity authentication module generates the trigger instruction. Note that the identity authentication module may be called by the system of the electronic device or by an application on the electronic device, such as a payment application or an unlock application.
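The client-side flow just described (trigger, start both cameras, stream frame pairs to the server in real time) can be sketched as follows. The camera class, trigger names, and send callback are stand-ins (assumptions) for the device's real capture and network APIs.

```python
# Sketch of the processor flow of this embodiment: a trigger
# instruction (user input or an identity-authentication call) starts
# both cameras, and simultaneously captured frame pairs are sent to
# the server. FakeCamera and the trigger strings are illustrative.

class FakeCamera:
    def __init__(self, name):
        self.name = name

    def capture(self, t):
        return f"{self.name}-frame-{t}"

def run_on_trigger(trigger, first_cam, second_cam, send, n_frames=3):
    if trigger not in ("user-input", "auth-module"):
        return False                      # no recognized trigger
    for t in range(n_frames):             # simultaneous capture loop
        send(first_cam.capture(t), second_cam.capture(t))
    return True

sent = []
ok = run_on_trigger("auth-module",
                    FakeCamera("front"), FakeCamera("rear"),
                    send=lambda a, b: sent.append((a, b)))
print(ok, sent[0])
```

A real device would run the capture loop until stopped and push encoded video over a streaming protocol rather than collecting tuples in a list.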
According to the electronic equipment disclosed by the invention, the two cameras are used for respectively generating the video streams and transmitting the two video streams to the server in real time, so that the server can process based on more data, more accurate and rich processing results are generated, and the functions of the electronic equipment and the server are enriched.
As an embodiment, in obtaining the first video stream through the first camera 101 and the second video stream through the second camera 102, the processor 104 is specifically configured to: collect images in real time with the first camera 101 to form the first video stream, and collect images in real time with the second camera 102 to form the second video stream, where the first video stream contains images of the user and the second video stream contains images of the scene where the user is located.
During image acquisition by the first camera 101 and the second camera 102, the first camera 101 faces the user and can acquire images of the user, so the first video stream formed by the first camera 101 contains images of the user. Because the first camera 101 and the second camera 102 are located on different sides of the electronic device, while the first camera 101 acquires images of the user, the second camera 102 can acquire images of the scene where the user is located, so the second video stream formed by the second camera 102 contains images of that scene.
It should be noted that the electronic device disclosed in the seventh embodiment of the present invention may be a mobile phone, a tablet computer, a VR device, an AR device, or another electronic device with two cameras.
Embodiment Eight
The invention also discloses a server, the structure of which is shown in fig. 8, comprising a communication module 201, a processor 202 and a memory 203.
The communication module 201 is used for transceiving data.
The memory 203 is used for storing data required by the processor 202 during operation and data generated by the processor 202 during operation.
The processor 202 receives, through the communication module 201, the first video stream and the second video stream sent by the electronic device, processes the first video stream and the second video stream to generate a processing result, and sends the processing result to the electronic device through the communication module 201. The first video stream is generated by a first camera of the electronic device, the second video stream is generated by a second camera of the electronic device, and the first camera and the second camera are arranged on different sides of the electronic device.
The server disclosed by the invention receives the first video stream and the second video stream sent by the electronic equipment, processes the first video stream and the second video stream, and sends the processing result to the electronic equipment. Because the processing objects of the server are the first video stream and the second video stream and contain more data, the server can generate more accurate and rich processing results, thereby enriching the functions of the electronic equipment and the server.
As an implementation manner, a first camera and a second camera of an electronic device perform image acquisition simultaneously to form a first video stream and a second video stream, where the first video stream includes an image of a user, and the second video stream includes an image of a scene where the user is located.
In processing the first video stream and the second video stream to generate a processing result, the processor 202 is specifically configured to: extract M frames of first images including the user from the first video stream, correspondingly extract N frames of second images from the second video stream based on the M frames of first images of the user, and perform processing based on the M frames of first images and the N frames of second images to generate the processing result, where M and N are integers greater than or equal to 1.
As an embodiment, in performing processing based on the M frames of first images and the N frames of second images, the processor obtains biometric information of the user based on the M frames of first images, determines whether the biometric information satisfies a predetermined condition, and, when it does, processes the N frames of second images to generate a processing result.
As an example, in processing the N frames of second images, when the biometric information satisfies a predetermined condition characterizing the user's attention, the processor obtains, through image analysis, object information of an object contained in the N frames of second images, and obtains data content associated with the object information as the processing result.
As another example, in processing the N frames of second images, when the biometric information satisfies a predetermined condition characterizing the identity of the user, the processor obtains, through image analysis, the environment information in the N frames of second images, determines whether it belongs to the plurality of secure environments corresponding to the user identity, and, when it does, determines that the identity verification based on the biometric information succeeds and takes the verification success as the processing result.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the electronic device and the server disclosed by the embodiment, the description is relatively simple because the electronic device and the server correspond to the method disclosed by the embodiment, and the relevant points can be referred to the description of the method part.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. An information processing method, the method comprising:
the method comprises the steps of obtaining a trigger instruction, wherein the trigger instruction is used for indicating to start a first camera and a second camera of the electronic equipment, and the first camera and the second camera are arranged on different side surfaces of the electronic equipment;
starting the first camera and the second camera based on the trigger instruction;
obtaining a first video stream through the first camera and simultaneously obtaining a second video stream through the second camera;
transmitting the first video stream and the second video stream to a server in real time, wherein the first video stream and the second video stream are used as processing objects of the server;
obtaining processing results of the first video stream and the second video stream, where the processing results are processing results generated by processing based on M frames of first images and N frames of second images sent by the server, the M frames of first images are M frames of first images including a user extracted from the first video stream, the N frames of second images are N frames of second images correspondingly extracted from the second video stream based on the M frames of first images of the user, and M and N are integers greater than or equal to 1;
the process that the server carries out processing based on the M frames of first images and the N frames of second images comprises the following steps:
obtaining biometric information of the user based on the M frames of first images;
judging whether the biological characteristic information meets a preset condition for representing the attention degree of the user or a preset condition for representing the identity of the user;
when the biological characteristic information meets a preset condition for representing the attention degree of the user or meets a preset condition for representing the identity of the user, processing the N frames of second images to generate a processing result;
when the biological characteristic information meets the preset condition, processing the N frames of second images to generate a processing result, wherein the processing result comprises:
when the biological characteristic information meets a preset condition for representing the attention of a user, obtaining object information of an object contained in the N frames of second images through image analysis;
obtaining data content associated with the object information as a processing result;
or,
when the biological characteristic information meets a preset condition for representing the identity of a user, obtaining environmental information in the N frames of second images through image analysis;
determining whether the environment information belongs to a plurality of safety environments corresponding to the user identities;
when the environment information belongs to a plurality of safety environments corresponding to the user identities, determining that the identity verification based on the biological characteristic information is successful, and taking a verification success result as a processing result;
and when the environment information does not belong to a plurality of safety environments corresponding to the user identities, determining that the identity authentication based on the biological characteristic information fails, and taking the authentication failure result as a processing result.
2. The information processing method according to claim 1, wherein the obtaining of the first video stream by the first camera and simultaneously obtaining the second video stream by the second camera comprises:
and acquiring images in real time by using the first camera to form the first video stream, and acquiring images in real time by using the second camera to form the second video stream, wherein the first video stream comprises images of users, and the second video stream comprises images of scenes where the users are located.
3. The information processing method according to claim 1, wherein the obtaining of the trigger instruction includes: when the electronic equipment calls an identity authentication module, acquiring a trigger instruction generated by the identity authentication module.
4. An information processing method, the method comprising:
receiving a first video stream and a second video stream sent by electronic equipment, wherein the first video stream is generated by a first camera of the electronic equipment, the second video stream is generated by a second camera which performs video acquisition simultaneously with the first camera in the electronic equipment, and the first camera and the second camera of the electronic equipment are arranged on different side surfaces;
processing the first video stream and the second video stream to generate a processing result: extracting M frames of first images including the user from the first video stream; correspondingly extracting N frames of second images from the second video stream based on the M frames of first images of the user; and performing processing based on the M frames of first images and the N frames of second images to generate the processing result, wherein M and N are integers greater than or equal to 1;
sending the processing result to the electronic equipment;
the process of processing based on the M frames of the first image and the N frames of the second image includes:
obtaining biometric information of the user based on the M frames of first images;
judging whether the biological characteristic information meets a preset condition for representing the attention degree of the user or a preset condition for representing the identity of the user;
when the biological characteristic information meets a preset condition for representing the attention degree of the user or meets a preset condition for representing the identity of the user, processing the N frames of second images to generate a processing result;
when the biological characteristic information meets the preset condition, processing the N frames of second images to generate a processing result, wherein the processing result comprises:
when the biological characteristic information meets a preset condition for representing the attention of a user, obtaining object information of an object contained in the N frames of second images through image analysis;
obtaining data content associated with the object information as a processing result;
or,
when the biological characteristic information meets a preset condition for representing the identity of a user, obtaining environmental information in the N frames of second images through image analysis;
determining whether the environment information belongs to a plurality of safety environments corresponding to the user identities;
when the environment information belongs to a plurality of safety environments corresponding to the user identities, determining that the identity verification based on the biological characteristic information is successful, and taking a verification success result as a processing result;
and when the environment information does not belong to a plurality of safety environments corresponding to the user identities, determining that the identity authentication based on the biological characteristic information fails, and taking the authentication failure result as a processing result.
5. An electronic device, comprising a first camera, a second camera, a communication module, a processor and a memory;
the communication module is used for receiving and transmitting data;
the memory is used for storing data required by the processor to run and data generated by the processor in the running process;
the first camera and the second camera are arranged on different side surfaces of the electronic equipment;
the processor obtains a trigger instruction for instructing the first camera and the second camera to start, starts the first camera and the second camera based on the trigger instruction, simultaneously obtains a first video stream through the first camera and a second video stream through the second camera, controls the communication module to transmit the first video stream and the second video stream to a server in real time, and obtains, through the communication module, the server's processing result for the first video stream and the second video stream;
the server's procedure for generating the processing result for the first video stream and the second video stream includes:
obtaining biometric information of a user based on M frames of first images, extracted from the first video stream, that include the user;
judging whether the biometric information meets a preset condition for representing the user's degree of attention or a preset condition for representing the user's identity;
when the biometric information meets a preset condition for representing the user's degree of attention or a preset condition for representing the user's identity, processing N frames of second images to generate the processing result, wherein the N frames of second images are correspondingly extracted from the second video stream based on the M frames of first images, and M and N are integers greater than or equal to 1;
wherein processing the N frames of second images to generate the processing result when the biometric information meets the preset condition comprises:
when the biometric information meets a preset condition for representing the user's degree of attention, obtaining, through image analysis, object information of an object contained in the N frames of second images; and
obtaining data content associated with the object information as the processing result;
or,
when the biometric information meets a preset condition for representing the user's identity, obtaining, through image analysis, environment information in the N frames of second images;
determining whether the environment information belongs to a plurality of secure environments corresponding to the user's identity;
when the environment information belongs to the plurality of secure environments corresponding to the user's identity, determining that identity authentication based on the biometric information succeeds, and taking the successful authentication result as the processing result; and
when the environment information does not belong to the plurality of secure environments corresponding to the user's identity, determining that identity authentication based on the biometric information fails, and taking the failed authentication result as the processing result.
6. The electronic device of claim 5, wherein the processor, in obtaining a first video stream via the first camera and a second video stream via the second camera, is specifically configured to:
acquiring images in real time with the first camera to form the first video stream, and acquiring images in real time with the second camera to form the second video stream, wherein the first video stream contains images of the user and the second video stream contains images of the scene in which the user is located.
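The simultaneous dual-stream capture of claim 6 can be sketched with two concurrent workers, one per camera, each appending frames to its own stream. This is only an illustration of the concurrency pattern; the frame contents are fabricated, and a real device would read from the front- and rear-facing camera drivers and forward both streams to the server in real time rather than collecting them locally.

```python
import threading
import queue

def capture(camera_name, n_frames, out_queue):
    # Stand-in for a camera worker: emit (source, frame index) tuples as fake frames.
    for i in range(n_frames):
        out_queue.put((camera_name, i))

# Two queues, one per camera stream; both workers run at the same time.
first_stream, second_stream = queue.Queue(), queue.Queue()
t1 = threading.Thread(target=capture, args=("first_camera", 3, first_stream))
t2 = threading.Thread(target=capture, args=("second_camera", 3, second_stream))
t1.start(); t2.start()
t1.join(); t2.join()

# Drain the queues into ordered frame lists (the "video streams").
first_video_stream = [first_stream.get() for _ in range(first_stream.qsize())]
second_video_stream = [second_stream.get() for _ in range(second_stream.qsize())]
```

Because both workers run concurrently, frame i of the first stream and frame i of the second stream are captured at (approximately) the same moment, which is what later allows the server to pair user frames with scene frames by index.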
7. The electronic device of claim 5, wherein the trigger instruction obtained by the processor is generated when an authentication module in the electronic device is invoked.
8. A server, comprising a communication module, a processor, and a memory;
the communication module is configured to receive and transmit data;
the memory is configured to store data required for the processor to run and data generated by the processor during operation;
the processor receives, through the communication module, a first video stream and a second video stream sent by an electronic device, extracts M frames of first images including a user from the first video stream, correspondingly extracts N frames of second images from the second video stream based on the M frames of first images, performs processing based on the M frames of first images and the N frames of second images to generate a processing result, and sends the processing result to the electronic device through the communication module, wherein M and N are integers greater than or equal to 1;
the processing based on the M frames of first images and the N frames of second images includes:
obtaining biometric information of the user based on the M frames of first images;
judging whether the biometric information meets a preset condition for representing the user's degree of attention or a preset condition for representing the user's identity;
when the biometric information meets a preset condition for representing the user's degree of attention or a preset condition for representing the user's identity, processing the N frames of second images to generate the processing result;
the first video stream is generated by a first camera of the electronic device, the second video stream is generated by a second camera of the electronic device that performs video acquisition simultaneously with the first camera, and the first camera and the second camera are arranged on different sides of the electronic device;
in the aspect of processing the N frames of second images, the processor is specifically configured to:
when the biometric information meets a preset condition for representing the user's degree of attention, obtain, through image analysis, object information of an object contained in the N frames of second images, and obtain data content associated with the object information as the processing result;
or,
when the biometric information meets a preset condition for representing the user's identity, obtain, through image analysis, environment information in the N frames of second images; determine whether the environment information belongs to a plurality of secure environments corresponding to the user's identity; when the environment information belongs to the plurality of secure environments corresponding to the user's identity, determine that identity authentication based on the biometric information succeeds and take the successful authentication result as the processing result; and when the environment information does not belong to the plurality of secure environments corresponding to the user's identity, determine that identity authentication based on the biometric information fails and take the failed authentication result as the processing result.
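The server-side frame pairing described in claim 8 can be sketched as follows: select the M frames of the first stream that contain the user, then take the time-corresponding N frames from the second stream (the same indices, since the claims state both cameras capture simultaneously). Frames are represented as plain dicts and the `has_user` flag stands in for an actual user-detection step; these are illustrative assumptions, not the patent's implementation.

```python
def extract_corresponding_frames(first_stream, second_stream):
    """Return (M frames of first images, N frames of second images).

    first_stream: frames from the user-facing camera, each with a
    'has_user' flag (stand-in for face/user detection).
    second_stream: simultaneously captured frames of the scene.
    """
    user_indices = [i for i, frame in enumerate(first_stream) if frame["has_user"]]
    first_images = [first_stream[i] for i in user_indices]    # M frames with the user
    second_images = [second_stream[i] for i in user_indices]  # corresponding N frames
    return first_images, second_images

# Fabricated example streams: the user appears only from t=1 onward.
first = [{"t": 0, "has_user": False}, {"t": 1, "has_user": True}, {"t": 2, "has_user": True}]
second = [{"t": 0, "scene": "street"}, {"t": 1, "scene": "office"}, {"t": 2, "scene": "office"}]
m_frames, n_frames = extract_corresponding_frames(first, second)
```

In this sketch M equals N because each selected first-stream frame maps to exactly one second-stream frame; the claims only require M, N ≥ 1, so a real implementation could subsample either side.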
CN201610513223.7A 2016-06-30 2016-06-30 Information processing method, electronic equipment and server Active CN106101824B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610513223.7A CN106101824B (en) 2016-06-30 2016-06-30 Information processing method, electronic equipment and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610513223.7A CN106101824B (en) 2016-06-30 2016-06-30 Information processing method, electronic equipment and server

Publications (2)

Publication Number Publication Date
CN106101824A CN106101824A (en) 2016-11-09
CN106101824B true CN106101824B (en) 2020-07-24

Family

ID=57213317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610513223.7A Active CN106101824B (en) 2016-06-30 2016-06-30 Information processing method, electronic equipment and server

Country Status (1)

Country Link
CN (1) CN106101824B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107368550B (en) * 2017-06-30 2019-12-31 Oppo广东移动通信有限公司 Information acquisition method, device, medium, electronic device, server and system
CN109522789A (en) * 2018-09-30 2019-03-26 北京七鑫易维信息技术有限公司 Eyeball tracking method, apparatus and system applied to terminal device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008014301A2 (en) * 2006-07-25 2008-01-31 Qualcomm Incorporated Mobile device with dual digital camera sensors and methods of using the same
CN103379256A (en) * 2012-04-25 2013-10-30 华为终端有限公司 Method and device for processing image
CN103905430A (en) * 2014-03-05 2014-07-02 广州华多网络科技有限公司 Real-name authentication method and system
CN104575137A (en) * 2015-01-19 2015-04-29 肖龙英 Split-type scene interaction multimedia intelligent terminal
CN104967553A (en) * 2015-04-30 2015-10-07 广东欧珀移动通信有限公司 Message interaction method, related device and communication system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101356269B1 (en) * 2009-09-08 2014-01-29 주식회사 팬택 Mobile terminal with dual camera and method for image processing using the same


Also Published As

Publication number Publication date
CN106101824A (en) 2016-11-09

Similar Documents

Publication Publication Date Title
US10275672B2 (en) Method and apparatus for authenticating liveness face, and computer program product thereof
CN109726624B (en) Identity authentication method, terminal device and computer readable storage medium
CN106156578B (en) Identity verification method and device
WO2018113526A1 (en) Face recognition and voiceprint recognition-based interactive authentication system and method
CN109254669B (en) Expression picture input method and device, electronic equipment and system
CN108010526B (en) Voice processing method and device
CN109992237B (en) Intelligent voice equipment control method and device, computer equipment and storage medium
CN113177437A (en) Face recognition method and device
CN108712667B (en) Smart television, screen capture application method and device thereof, and readable storage medium
US11076091B1 (en) Image capturing assistant
CN107133567B (en) woundplast notice point selection method and device
CN106101824B (en) Information processing method, electronic equipment and server
CN112911192A (en) Video processing method and device and electronic equipment
CN111565298B (en) Video processing method, device, equipment and computer readable storage medium
CN112399239A (en) Video playing method and device
CN115376187A (en) Device and method for detecting speaking object in multi-user-computer interaction scene
CN112866577B (en) Image processing method and device, computer readable medium and electronic equipment
CN113157174B (en) Data processing method, device, electronic equipment and computer storage medium
CN116704405B (en) Behavior recognition method, electronic device and storage medium
CN111274925A (en) Method and device for generating recommended video, electronic equipment and computer storage medium
CN111627039A (en) Interaction system and interaction method based on image recognition
CN109886084A (en) Face authentication method, electronic equipment and storage medium based on gyroscope
CN109740557A (en) Method for checking object and device, electronic equipment and storage medium
CN112165626B (en) Image processing method, resource acquisition method, related equipment and medium
CN110990607B (en) Method, apparatus, server and computer readable storage medium for screening game photos

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant