CN111093028A - Information processing method and electronic equipment


Info

Publication number
CN111093028A
CN111093028A CN201911415740.0A CN201911415740A
Authority
CN
China
Prior art keywords
information
image
target
image information
target information
Prior art date
Legal status
Pending
Application number
CN201911415740.0A
Other languages
Chinese (zh)
Inventor
高小菊
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201911415740.0A
Publication of CN111093028A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/147 Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals

Abstract

The application provides an information processing method, comprising: obtaining environmental information around a first device, the environmental information including at least first image information; processing the first image information; if the analysis result indicates that the first image information includes target information, determining the target information and non-target information from the first image information; and outputting the target information and the non-target information on a second device at different scaling ratios. In this scheme, an image of the surroundings of the first device is acquired and analyzed to determine the target information and non-target information it contains; when the second device outputs them, different scaling ratios are adopted. Specifically, the target information can be enlarged for output, so that the user of the second device sees the enlarged target information in the displayed content, improving the effect of the video call.

Description

Information processing method and electronic equipment
Technical Field
The present application relates to the field of electronic devices, and in particular, to an information processing method and an electronic device.
Background
Video calls are a way of communicating so that people at two or more locations have a face-to-face conversation via a communication device and a network.
In an existing video call, the camera transmits the captured video image directly to the other party, but the other party cannot make out the specific content details in the video image, so the video call effect is poor.
Disclosure of Invention
In view of this, the present application provides an information processing method to solve the problem in the prior art that, during a video call, the other party cannot see the details of specific content in the video image.
In order to achieve the above purpose, the present application provides the following technical solutions:
an information processing method, comprising:
obtaining environmental information around a first device, the environmental information including at least first image information;
processing the first image information;
if the analysis result indicates that the first image information includes target information, determining the target information and non-target information from the first image information,
wherein, when the target information and the non-target information are output by a second device, their scaling ratios are different.
Preferably, in the above method, the processing the first image information includes:
analyzing whether the first image information contains target information to obtain an analysis result, wherein the target information satisfies a preset condition.
Preferably, in the above method, the analyzing whether the first image information includes target information satisfying a preset condition to obtain an analysis result includes:
comparing person image features in a preset database with the first image information, and screening to obtain an image area in the first image matching the person image features, the analysis result indicating that the first image information includes target person information.
Preferably, in the above method, the environmental information further includes at least two sets of environmental audio information collected by at least two audio acquisition units, and the analyzing whether the first image information includes target information satisfying a preset condition to obtain an analysis result includes:
analyzing the at least two sets of environmental audio information to obtain the sounding position at which the audio information is produced;
analyzing, based on the setting positions of the at least two audio acquisition units, the position of the image acquisition unit that acquires the first image information, and the first image information, a second image area in the first image information corresponding to the sounding position;
and comparing person image features in a preset database with the second image area corresponding to the sounding position, analyzing to obtain an image area in the first image information matching the person image features, the analysis result indicating that the first image information includes target person information.
Preferably, in the above method, the analyzing whether the first image information includes target information satisfying a preset condition to obtain an analysis result includes:
analyzing the first image information according to a preset object feature analysis rule to obtain an image area in the first image satisfying the object feature analysis rule, the analysis result indicating that the first image information includes target object information;
or
analyzing the first image information according to a preset object feature analysis rule to obtain that the first image includes at least two first image areas satisfying the object feature analysis rule, the first image areas corresponding to first objects for recording information; and determining that a first object is the target object based on the image of the writing area of that first object at a first moment differing from the image of the writing area at a second moment, the analysis result indicating that the first image information includes target object information, wherein the first moment is adjacent to the second moment.
Preferably, in the above method, the analyzing the first image information according to a preset object feature analysis rule to obtain an image area in the first image satisfying the object feature analysis rule includes:
comparing preset object boundary/region image features with the first image information, and screening to obtain second image regions in the first image matching the object boundary/region image features;
and combining the second image regions according to their distribution in the first image information to obtain the target object information.
Preferably, the method further comprises:
intercepting third image information from the first image information, the third image information including the target information;
and outputting the first image information and the third image information to a second device, so that the second device displays the first image information at a first scaling ratio and the third image information at a second scaling ratio, the first scaling ratio being different from the second scaling ratio.
An electronic device, comprising:
an image acquisition unit, configured to obtain environmental information around a first device, the environmental information including at least first image information;
a processing unit, configured to process the first image information, and if the analysis result indicates that the first image information includes target information, to determine the target information and non-target information from the first image information;
and a communication unit, configured to send the target information and the non-target information to a second device, wherein, when output by the second device, the scaling ratios of the target information and the non-target information are different.
Preferably, the electronic device further includes:
at least two audio acquisition units, configured to collect environmental audio information around the first device;
wherein the processing unit is configured to analyze the at least two sets of environmental audio information to obtain the sounding position of the audio information; to analyze, based on the setting positions of the at least two audio acquisition units, the position of the image acquisition unit that acquires the first image information, and the first image information, a second image area in the first image information corresponding to the sounding position; and to compare person image features in a preset database with the second image area corresponding to the sounding position, obtaining an image area in the first image information matching the person image features, the analysis result indicating that the first image information includes target person information.
An electronic device, comprising:
a camera, configured to obtain environmental information around a first device, the environmental information including at least first image information;
a processor, configured to process the first image information, and if the analysis result indicates that the first image information includes target information, to determine the target information and non-target information from the first image information;
and a communication interface, configured to send the target information and the non-target information to a second device, wherein, when output by the second device, the scaling ratios of the target information and the non-target information are different.
Compared with the prior art, the information processing method of the present application obtains environmental information around the first device, the environmental information including at least first image information; processes the first image information; if the analysis result indicates that the first image information includes target information, determines the target information and non-target information from the first image information; and outputs the target information and the non-target information on the second device at different scaling ratios. In this scheme, an image of the surroundings of the first device is acquired and analyzed to determine the target information and non-target information it contains; when the second device outputs them, different scaling ratios are adopted. Specifically, the target information can be enlarged for output, so that the user of the second device sees the enlarged target information in the displayed content, improving the effect of the video call.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a flowchart of an information processing method according to embodiment 1 of the present application;
fig. 2 is a schematic view of a display interface of a second device in embodiment 1 of an information processing method provided in the present application;
fig. 3 is a flowchart of an information processing method according to embodiment 2 of the present application;
fig. 4 is a flowchart of an information processing method according to embodiment 3 of the present application;
fig. 5 is a flowchart of an information processing method according to embodiment 4 of the present application;
fig. 6 is a schematic view of an application scenario in an embodiment 4 of an information processing method provided in the present application;
fig. 7 is a flowchart of an information processing method according to embodiment 5 of the present application;
fig. 8 is a flowchart of an embodiment 6 of an information processing method provided in the present application;
fig. 9 is a flowchart of an embodiment 7 of an information processing method provided in the present application;
fig. 10 is a schematic view of a display interface of a second device in embodiment 7 of an information processing method provided in the present application;
fig. 11 is a schematic view of another display interface of the second device in embodiment 7 of the information processing method provided in the present application;
fig. 12 is a schematic view of still another display interface of the second device in embodiment 7 of the information processing method provided in the present application;
fig. 13 is a schematic structural diagram of an electronic device in embodiment 1 provided in the present application;
fig. 14 is a schematic structural diagram of an electronic device in embodiment 2 provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As shown in fig. 1, a flowchart of embodiment 1 of the information processing method provided in the present application; the method is applied to an electronic device, which serves as the first device in the present application, and includes the following steps:
step S101: obtaining environmental information around a first device, the environmental information including at least first image information;
the environment information may be image information, or may be a combination of image information and other types of information.
For example, a combination of the image information and the audio information will be described in detail in the following embodiments, and the detailed description is omitted in this embodiment.
The first image information of the first device may be planar image information acquired by a camera and subjected to preliminary processing, where the camera is either set on the first device or connected to it.
In a specific implementation, the camera may be a wide-angle (fisheye) lens disposed facing upward; the original image it collects covers a hemispherical range centered on the lens.
Specifically, the original image information is processed: the edge portion is selected and cut out as annular image information, which corresponds to a horizontal panoramic range; this annular image information is the first image information.
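By way of illustration, the following is a minimal sketch of this cropping step, assuming Python with OpenCV; the ring radii, the angular resolution and the function name are illustrative assumptions rather than values from the present application. The edge ring of the fisheye frame is unwrapped into a horizontal panoramic strip:

```python
import cv2
import numpy as np

def annular_to_panorama(frame: np.ndarray,
                        inner_frac: float = 0.45,
                        outer_frac: float = 0.98) -> np.ndarray:
    """Cut the edge ring of a hemispherical fisheye frame and unwrap it
    into a horizontal panoramic strip (the 'first image information')."""
    h, w = frame.shape[:2]
    center = (w / 2.0, h / 2.0)
    max_radius = min(center) * outer_frac
    # warpPolar maps the circle around `center` to a rectangle whose
    # columns correspond to radius and whose rows correspond to angle
    polar = cv2.warpPolar(frame, (int(max_radius), 1440), center,
                          max_radius, cv2.WARP_POLAR_LINEAR)
    ring = polar[:, int(max_radius * inner_frac):]  # keep the edge portion only
    # rotate so the angle axis runs horizontally: a 360-degree panorama
    return cv2.rotate(ring, cv2.ROTATE_90_COUNTERCLOCKWISE)
```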
In a specific implementation, a dedicated function option/key may be provided, and the step of processing the first image information is started based on the user's selection; if the user does not enable it, the first image information may be transmitted directly to the second device for display.
Step S102: processing the first image information;
specifically, the first image information is analyzed to determine whether it includes the target information.
The target information may be a person or an object: a specific person or persons among a plurality of persons, or a specific object or objects among a plurality of objects.
The object may be a writing board or the like for writing content in a video call, and the analysis of the target information will be described in detail in the following embodiments, which are not described in detail in this embodiment.
Step S103: if the analysis result indicates that the first image information includes target information, determining the target information and non-target information from the first image information.
When the target information and the non-target information are output by the second device, their scaling ratios are different.
If the first image information includes target information, the first image information is divided into target information and non-target information.
Wherein the target information is information of a person and/or an object.
The non-target information may be the first image information itself, or partial information of the first image information; that partial information may or may not include the target information. When it does not include the target information, the non-target information is complementary to the target information.
Specifically, when the second device outputs the target information and the non-target information, the scaling settings of the target information and the non-target information are different.
In specific implementation, the scaling of the target information is greater than the scaling of the non-target information, so that the target information can be displayed in a larger display area in a display screen of the second device, a user of the second device can see the enlarged target information in display content, and the effect of video call is improved.
The second device display interface shown in fig. 2 includes target information 201 and non-target information 202, where the target information is person A and the non-target information is the overall image of the environment around the first device, including persons A, B and C. The display interface includes a first area 203 and a second area 204; the first area displays the target information 201, and the second area 204 displays the non-target information 202. The display scale of person A in the first area is larger than that in the second area.
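By way of illustration, the following is a minimal sketch of how a second device might lay out the two areas of fig. 2, assuming the first image and a target bounding box are available; the canvas size and the two-thirds/one-third split are illustrative assumptions, not values from the present application:

```python
import cv2
import numpy as np

def render_call_view(first_image: np.ndarray,
                     target_bbox: tuple[int, int, int, int],
                     canvas_size: tuple[int, int] = (720, 1280)) -> np.ndarray:
    """Compose the first area (enlarged target) and the second area
    (whole environment) on one canvas; target_bbox is (x, y, w, h)."""
    H, W = canvas_size
    canvas = np.zeros((H, W, 3), dtype=np.uint8)
    x, y, w, h = target_bbox
    target = first_image[y:y + h, x:x + w]
    # first area (left two thirds): the target information, enlarged
    canvas[:, : 2 * W // 3] = cv2.resize(target, (2 * W // 3, H))
    # second area (right third): the whole environment at a smaller scale
    canvas[:, 2 * W // 3:] = cv2.resize(first_image, (W - 2 * W // 3, H))
    return canvas
```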
In summary, the information processing method provided in this embodiment obtains environmental information around the first device, the environmental information including at least first image information; processes the first image information; and, if the analysis result indicates that the first image information includes target information, determines the target information and non-target information from the first image information, the two having different scaling ratios when output by the second device. In this scheme, an image of the surroundings of the first device is acquired and analyzed to determine the target information and non-target information it contains; when the second device outputs them, different scaling ratios are adopted. Specifically, the target information can be enlarged for output, so that the user of the second device sees the enlarged target information in the displayed content, improving the effect of the video call.
As shown in fig. 3, a flowchart of embodiment 2 of an information processing method provided by the present application includes the following steps:
step S301: obtaining environmental information around a first device, the environmental information including at least first image information;
step S301 is the same as step S101 in embodiment 1, and details are not described in this embodiment.
Step S302: analyzing whether the first image information contains target information to obtain an analysis result;
wherein the target information satisfies a preset condition.
Specifically, the preset condition may be set to be different according to the target information.
For example, when the target information is a person, the preset condition is a condition corresponding to the person; when the target information is a specific object, the preset condition is a condition corresponding to the specific object.
It should be noted that, because the image acquisition unit acquires the original image as a dynamic, continuous process, the original image is processed in real time to continuously obtain first image information, and the first image information is continuously analyzed.
In a specific implementation, the first image information may be an image of one frame or may be an image of several consecutive frames.
The analysis process will be explained in detail in the following embodiments, which are not described in detail in this embodiment.
Step S303: if the analysis result shows that the first image information comprises target information, determining the target information and non-target information according to the first image information;
step S303 is the same as step S103 in embodiment 1, and details are not described in this embodiment.
In summary, in the information processing method provided in this embodiment, processing the first image information includes: analyzing whether the first image information contains target information to obtain an analysis result, wherein the target information satisfies a preset condition. In this scheme, a preset condition is set for the target information, so that the first image information is analyzed against the preset condition to determine whether it contains the target information.
As shown in fig. 4, a flowchart of embodiment 3 of an information processing method provided by the present application includes the following steps:
step S401: obtaining environmental information around a first device, the environmental information including at least first image information;
step S401 is the same as step S301 in embodiment 2, and details are not described in this embodiment.
Step S402: comparing person image features in a preset database with the first image information, and screening to obtain an image area in the first image matching the person image features, the analysis result indicating that the first image information includes target person information;
the first device is preset with a database, the database stores a large amount of character image features, and the character image features in the database can be pre-stored in the database.
In a specific implementation, the person corresponding to the image feature of the person may be a designated person or a general person.
When the person image features identify certain specified persons, the person image features include facial detail features of those persons, such as details of the eyebrows, nose, ears and the like;
when the person image features refer to persons in general, the person feature information includes features of a human face, such as at least one of eyebrows, nose, ears and the like distributed over a certain area in a specific distribution pattern.
Specifically, when an image area matching the person image features is screened out of the first image information, the first image information includes target person information. If the person image features correspond to specified persons and the screening result includes a specified person, the determined target person is that specified person; if the person image features refer to persons in general, the screening result indicates that a person is present in the first image information.
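By way of illustration, the following is a minimal sketch of this comparison step, assuming the person image features are stored as face-embedding vectors and that detect_faces stands for whatever face detector/embedder the implementation actually uses (a placeholder, not an API named by the present application); the cosine-similarity threshold is an illustrative assumption:

```python
import numpy as np

def find_target_person_regions(first_image,
                               database_embeddings: dict[str, np.ndarray],
                               detect_faces,  # image -> [(bbox, embedding)]
                               threshold: float = 0.6):
    """Return image areas whose face features match the preset database."""
    matches = []
    for bbox, emb in detect_faces(first_image):
        for person_id, ref in database_embeddings.items():
            cos = float(np.dot(emb, ref) /
                        (np.linalg.norm(emb) * np.linalg.norm(ref)))
            if cos > threshold:  # area matches a stored person image feature
                matches.append((person_id, bbox))
    return matches  # non-empty => first image includes target person info
```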
Step S403: and if the analysis result shows that the first image information comprises target information, determining the target information and non-target information according to the first image information.
Step S403 is the same as step S303 in embodiment 2, and details are not described in this embodiment.
In summary, in the information processing method provided in this embodiment, analyzing whether the first image information includes target information satisfying a preset condition to obtain an analysis result includes: comparing person image features in a preset database with the first image information, and screening to obtain an image area in the first image matching the person image features, the analysis result indicating that the first image information includes target person information. In this scheme, the preset person image features are compared with the first image information to screen out the matching image area of the first image; the target person information in the first image information can thus be determined through image feature comparison.
As shown in fig. 5, a flowchart of embodiment 4 of an information processing method provided by the present application includes the following steps:
step S501: obtaining environmental information around a first device, the environmental information including at least first image information;
step S501 is the same as step S301 in embodiment 2, and details are not described in this embodiment.
Step S502: analyzing at least two sets of environmental audio information to obtain the sounding position at which the audio information is produced;
the environment information also comprises at least two groups of environment audio information collected by at least two audio collecting units.
Specifically, the environment in which the first device is disposed is further provided with a plurality of audio collecting units, such as microphones, for collecting audio information in the environment.
It should be noted that sound propagates in all directions from the sounding position as the center of a sphere; the audio information acquired by audio acquisition units at different positions is therefore not exactly the same, differing in, for example, loudness and time of arrival (though not limited thereto). By analyzing the audio information acquired by each audio acquisition unit based on parameters such as loudness and time difference, the sounding position of the audio information can be obtained.
In a specific implementation, the audio acquisition units can be divided into two types: dedicated positioning audio acquisition units ("positioning units") and audio acquisition units dedicated to capturing audio for transmission ("detection acquisition units").
In this embodiment, the environmental audio information used to analyze the sounding position may be collected by the positioning units and used exclusively for that analysis, while the audio collected by the detection acquisition units is used exclusively for sending to the second device for playback. This avoids the sounding-position analysis affecting the audio that is subsequently transmitted to and played by the second device.
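By way of illustration, the following is a minimal sketch of the time-difference analysis for one microphone pair, assuming Python with numpy; the sample rate, microphone spacing and function name are illustrative assumptions rather than part of the present application. The arrival-time difference is estimated by cross-correlation and converted to an azimuth under a far-field approximation:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def estimate_azimuth(mic_a: np.ndarray, mic_b: np.ndarray,
                     sample_rate: int = 48_000,
                     mic_spacing: float = 0.10) -> float:
    """Return the sound-source azimuth (radians) relative to the mic pair."""
    corr = np.correlate(mic_a, mic_b, mode="full")
    lag = int(np.argmax(corr)) - (len(mic_b) - 1)  # delay of mic_a vs mic_b
    tdoa = lag / sample_rate                        # seconds
    # far-field approximation: delay = spacing * sin(theta) / c
    sin_theta = np.clip(tdoa * SPEED_OF_SOUND / mic_spacing, -1.0, 1.0)
    return float(np.arcsin(sin_theta))
```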
Step S503: analyzing to obtain a second image area corresponding to the sounding position in the first image information based on the setting positions of the at least two audio acquisition units, the position of an image acquisition device for acquiring the first image information and the first image information;
wherein a second image area corresponding to the sounding position is determined in the first image information, so that the target person information about the speaker can then be determined based on that second image area.
Specifically, the relative positional relationship between the sounding position and the image acquisition unit is determined based on the setting positions of the audio acquisition units and the position of the image acquisition unit that acquires the first image information, and the corresponding position in the first image information is then determined based on that relative positional relationship.
Fig. 6 is a schematic view of an application scene related in this embodiment, where the application scene includes an image capturing unit 601 and four audio capturing units 602, where the audio capturing units and the image capturing unit are arranged in an array, the four audio capturing units are sequentially distributed on two sides of the image capturing unit, and a speaker 603 is in an image capturing range of the image capturing unit 601 (a dotted line is used in the figure to indicate the image capturing range). The first image information acquired by the image acquisition unit contains the image of the speaker, the sound production position (namely the specific position of the speaker) can be determined based on the audio information acquired by the four audio acquisition units, and then the corresponding image area is determined in the first image information according to the sound production position in the subsequent steps.
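Continuing the sketch above, and assuming for illustration that the first image information is a horizontal panorama spanning 360 degrees that shares one reference direction with the microphone array (an assumption, not something fixed by the present application), the sounding azimuth can be mapped to a column range of the image, giving the second image area:

```python
import numpy as np

def azimuth_to_image_region(azimuth_rad: float, image_width: int,
                            window_frac: float = 0.15) -> tuple[int, int]:
    """Return (x_start, x_end) of the image columns around the speaker."""
    # map azimuth in [-pi, pi) to a horizontal pixel position
    frac = (azimuth_rad + np.pi) / (2 * np.pi)
    cx = int(frac * image_width)
    half = int(window_frac * image_width / 2)  # window width is an assumption
    return max(0, cx - half), min(image_width, cx + half)
```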
Step S504: comparing person image features in a preset database with the second image area corresponding to the sounding position, and analyzing to obtain an image area in the first image information matching the person image features, the analysis result indicating that the first image information includes target person information;
in this embodiment, the person corresponding to the character image feature may be a designated person or a person with a general meaning.
When the person image features generally refer to people, the person feature information comprises features of human faces, if at least one of eyebrows, noses and ears is distributed in a certain area according to a specific distribution mode, if people exist at the sounding position, the audio information sent by the people is determined, and the analysis result represents that the first image information comprises target person information.
In a specific implementation, when some specified persons of the persons corresponding to the image features of the persons are identified, the image features of the persons include the facial detail features of the persons, such as the detail features of the eyebrows, the noses, the ears and the like, and correspondingly, the database can also preset sound features for the specific persons; when the sound production position is determined based on the audio information, the sound characteristics in the audio information can be analyzed, the character image characteristics matched with the sound characteristics are searched from the database, whether the second image area at the sound production position is the corresponding character is judged based on the character image characteristics, if yes, the image area matched with the character image characteristics in the first image information is judged, and the analysis result represents that the first image information comprises the target character information.
Step S505: and if the analysis result shows that the first image information comprises target information, determining the target information and non-target information according to the first image information.
Step S505 is the same as step S303 in embodiment 2, and details are not described in this embodiment.
In summary, in the information processing method provided in this embodiment, analyzing whether the first image information includes target information satisfying a preset condition to obtain an analysis result includes: analyzing the at least two sets of environmental audio information to obtain the sounding position of the audio information; analyzing, based on the setting positions of the at least two audio acquisition units, the position of the image acquisition unit that acquires the first image information, and the first image information, a second image area in the first image information corresponding to the sounding position; and comparing person image features in a preset database with the second image area corresponding to the sounding position, obtaining an image area in the first image information matching the person image features, the analysis result indicating that the first image information includes target person information. In this scheme, the sounding position is determined by analyzing multiple sets of environmental audio information, the corresponding target person information is located in the first image information, and the target person information is thus determined through audio positioning combined with image feature comparison.
As shown in fig. 7, a flowchart of embodiment 5 of an information processing method provided by the present application includes the following steps:
step S701: obtaining environmental information around a first device, the environmental information including at least first image information;
step S701 is the same as step S301 in embodiment 2, and details are not described in this embodiment.
Step S702: analyzing the first image information according to a preset object feature analysis rule to obtain an image area which meets the object feature analysis rule in the first image, wherein an analysis result represents that the first image information comprises target object information;
in this embodiment, whether the first image information includes specific target object information is analyzed and determined.
The target object is specifically a structure for recording information, such as a whiteboard, blackboard, frosted glass, transparent glass, cardboard, tablet computer or display, that is capable of recording (including inputting) information.
The first device is preset with a database storing a large number of object features, which can be stored in the database in advance.
In a specific implementation, images of objects of various materials, various lighting conditions, and various angles may be input into a database, so that when first image information is acquired, images in the database are compared with the first image information to determine whether similar image areas exist in the first image information.
Specifically, step S702 includes:
step S7021: comparing the preset object boundary/region image characteristics with the first image information, and screening to obtain a second image region matched with the object boundary/region image characteristics in the first image;
the image feature of the object boundary/region in the first image is matched, specifically, the similarity between the image feature of the second image region in the first image and the image feature of the object boundary/region is greater than a certain threshold.
Specifically, the object boundary image features are used for screening an image area matched with the object boundary; the object region image features are used for screening image regions matched with writing regions in the object.
Based on the object boundary/region image features preset in the database, these features are compared with the first image information, and a plurality of second image regions matching the object boundary image features and a plurality of second image regions matching the writing-region image features are screened from the first image information.
Step S7022: combining the second image regions according to their distribution in the first image information to obtain the target object information.
Each second image region matches a component of the object, and combining the plurality of second image regions yields the object.
Specifically, the target object information is obtained by combining the second image regions according to the distribution rule of the second image regions in the first image information.
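By way of illustration, the following is a minimal sketch of this combining step, assuming the screened second image regions are available as (x, y, w, h) boxes; the proximity rule and gap value are illustrative assumptions. Regions whose boxes lie close together are merged into one target object region:

```python
def combine_regions(boxes: list[tuple[int, int, int, int]],
                    gap: int = 20) -> list[tuple[int, int, int, int]]:
    """Merge second image regions whose boxes overlap or nearly touch."""
    merged = [list(b) for b in boxes]
    changed = True
    while changed:  # repeat until no more merges happen
        changed = False
        out = []
        while merged:
            x, y, w, h = merged.pop()
            i = 0
            while i < len(merged):
                x2, y2, w2, h2 = merged[i]
                # treat boxes closer than `gap` pixels as parts of one object
                if (x - gap < x2 + w2 and x2 - gap < x + w and
                        y - gap < y2 + h2 and y2 - gap < y + h):
                    nx, ny = min(x, x2), min(y, y2)
                    w = max(x + w, x2 + w2) - nx
                    h = max(y + h, y2 + h2) - ny
                    x, y = nx, ny
                    merged.pop(i)
                    changed = True
                else:
                    i += 1
            out.append([x, y, w, h])
        merged = out
    return [tuple(b) for b in merged]
```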
Step S703: and if the analysis result shows that the first image information comprises target information, determining the target information and non-target information according to the first image information.
Step S703 is the same as step S303 in embodiment 2, and is not described in detail in this embodiment.
In summary, in an information processing method provided in this embodiment, the analyzing whether the first image information includes target information that satisfies a preset condition to obtain an analysis result includes: and analyzing the first image information according to a preset object characteristic analysis rule to obtain an image area which meets the object characteristic analysis rule in the first image, wherein the analysis result represents that the first image information comprises target object information. In the scheme, the preset object image characteristics are compared with the first image information to screen out the image area matched with the object image characteristics in the first image, and the target object information in the first image information can be determined through image characteristic comparison, so that the purpose of determining the object for input/writing operation in the first image information is achieved.
As shown in fig. 8, a flowchart of embodiment 6 of an information processing method provided by the present application includes the following steps:
step S801: obtaining environmental information around a first device, the environmental information including at least first image information;
step S801 is the same as step S301 in embodiment 2, and details are not described in this embodiment.
Step S802: analyzing the first image information according to a preset object characteristic analysis rule to obtain that the first image comprises at least two first image areas meeting the object characteristic analysis rule, wherein the first image areas correspond to a first object for recording information;
in this embodiment, the first image information includes a plurality of objects for recording information, from which an object currently in use (i.e., a target object) is determined.
Wherein each first object is specifically a structure for recording information, such as a whiteboard, blackboard, cardboard or tablet computer, that is capable of recording (including inputting) information.
The first device is preset with a database storing a large number of object features, which can be stored in the database in advance.
Specifically, the first image information is analyzed based on the object features in the database to obtain a plurality of objects contained therein.
The plurality of objects may be the same kind of object for recording information or different kinds, in any combination of structures capable of recording (including inputting) information, such as whiteboards, blackboards, cardboard and tablet computers.
Step S803: determining that a first object is the target object based on the image of the writing area of that first object at a first moment differing from the image of the writing area at a second moment, the analysis result indicating that the first image information includes target object information;
wherein the first time is adjacent to the second time.
Each object includes a writing area, which is used for displaying written content; the writing area can be the writing surface of a whiteboard, blackboard or the like, or the display screen of a tablet computer (the display screen showing the input content), and so on.
Specifically, when the user inputs/writes content in the writing area of an object, the writing area changes. Based on the images of the writing area of a first object in a first image area differing between two adjacent moments, i.e. the writing area having changed, that first object is determined to be the target object; in other words, the object the user is currently using is selected as the target object.
Specifically, the first moment and the second moment may be analysis moments corresponding to a preset period; that is, the images acquired by the image acquisition unit are analyzed at the preset period, and when the writing area changes between two adjacent analysis moments, the user is considered to be using that object, which is taken as the target object.
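By way of illustration, the following is a minimal sketch of this change test, assuming OpenCV, frames sampled at the preset analysis period, and candidate writing areas given as (x, y, w, h) boxes; the change threshold is an illustrative assumption:

```python
import cv2
import numpy as np

def select_active_object(frame_t1: np.ndarray, frame_t2: np.ndarray,
                         writing_areas: list[tuple[int, int, int, int]],
                         threshold: float = 4.0) -> int | None:
    """Return the index of the first object whose writing area changed
    between two adjacent analysis moments (the target object)."""
    g1 = cv2.cvtColor(frame_t1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(frame_t2, cv2.COLOR_BGR2GRAY)
    for i, (x, y, w, h) in enumerate(writing_areas):
        diff = cv2.absdiff(g1[y:y + h, x:x + w], g2[y:y + h, x:x + w])
        if float(diff.mean()) > threshold:  # pictures differ between moments
            return i                         # this object is in use
    return None
```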
Step S804: and if the analysis result shows that the first image information comprises target information, determining the target information and non-target information according to the first image information.
Step S804 is the same as step S303 in embodiment 2, and details are not described in this embodiment.
In summary, in the information processing method provided in this embodiment, analyzing whether the first image information includes target information satisfying a preset condition to obtain an analysis result includes: analyzing the first image information according to a preset object feature analysis rule to obtain that the first image includes at least two first image areas satisfying the object feature analysis rule, the first image areas corresponding to first objects for recording information; and determining that a first object is the target object based on the image of the writing area of that first object at a first moment differing from the image of the writing area at a second moment, the analysis result indicating that the first image information includes target object information, where the first moment is adjacent to the second moment. In this scheme, the object in use is taken as the target object, so the object in use is enlarged for output; the user of the second device can see the enlarged target information in the displayed content, improving the effect of the video call.
As shown in fig. 9, a flowchart of embodiment 7 of an information processing method provided by the present application includes the following steps:
step S901: obtaining environmental information around a first device, the environmental information including at least first image information;
step S902: processing the first image information;
step S903: if the analysis result shows that the first image information comprises target information, determining the target information and non-target information according to the first image information;
steps S901 to 903 are the same as steps S101 to 103 in embodiment 1, and are not described in detail in this embodiment.
Step S904: intercepting third image information from the first image information, wherein the third image information comprises the target information;
and intercepting the first image information based on the target information to obtain third image information containing the target information.
For example, when the target information is a person, image information of the person is intercepted from the first image information; when the target information is an object for recording information, image information of the object is cut from the first image information.
Step S905: and outputting the first image information and the third image information to a second device, so that the second device displays the first image information at a first scaling and displays the third image information at a second scaling.
Wherein the first scaling is different from the second scaling.
And outputting the intercepted third image information and the first image information to a second device, wherein the second device displays based on the third image information and the first image information.
In a specific implementation, when the first device sends the third image information and the first image information, the first device may send a corresponding scaling control condition to the second device, so that the second device controls a scaling ratio of the third image information and the first image information based on the scaling control condition.
In the second device, in order to display the details of the first image information, i.e. to enlarge the third image information, the third image information is displayed at a larger scaling ratio and the first image information at a smaller scaling ratio.
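By way of illustration, the following is a minimal sketch of steps S904/S905 on the first device, assuming the target information is available as a bounding box and that send stands for whatever transport the call uses; the message fields, including the scaling control condition, are illustrative assumptions rather than part of the present application:

```python
import json
import numpy as np

def emit_frames(first_image: np.ndarray,
                target_bbox: tuple[int, int, int, int],
                send) -> None:
    x, y, w, h = target_bbox
    # step S904: intercept the third image information (contains the target)
    third_image = first_image[y:y + h, x:x + w].copy()
    # step S905: output both images plus a scaling control condition so the
    # second device shows the third image at a larger ratio than the first
    control = json.dumps({"first_scale": 0.4, "third_scale": 1.5})
    send(first_image, third_image, control)
```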
Fig. 10 is a schematic diagram of a display interface of the second device, which includes target information and non-target information, where the non-target information is the overall image of the environment around the first device, including persons A, B and C, and the target information is person A. The display interface includes a first area 1001 and a second area 1002; the first area displays the target information and the second area displays the non-target information. The display scale of person A in the first area is larger than that in the second area.
Another display interface of the second device, shown in fig. 11, includes target information and non-target information, where the non-target information is the overall image of the environment around the first device, including 3 participants and 1 whiteboard, and the target information includes participant Y and the whiteboard. The display interface includes first areas 1101-1102 and a second area 1103; the first area 1101 displays the whiteboard, the first area 1102 displays participant Y, and the second area 1103 displays the non-target information. In the display interface, the display scales of participant Y and the whiteboard are larger than the display scale of participant Y in the second area; the display scales of participant Y and the whiteboard may be the same or different. In this way, the overall image of the surrounding environment can still be viewed in full, without fragmentation.
In a specific implementation, the image information obtained by cutting the third image information out of the first image information may be used as the non-target information (the non-target information being complementary to the target information), and in the subsequent step the target information and the non-target information are sent to the second device.
Fig. 12 is a schematic diagram of still another display interface of the second device. The corresponding scene includes participants A, B and C, where the target information includes participant A and the non-target information is the information other than the target information; for the overall image of the surrounding environment in this scene, refer to the overall image 1002 of fig. 10. The display interface includes first areas 1201-1202 and a second area 1203; the second area 1203 displays participant B, and the first areas 1201-1202 display the non-target information, including participants A and C respectively. In the display interface, the display scale of participant A is larger than that of the image in the first area.
It should be noted that the non-target information and the target information may be arranged for display according to their distribution in the first image information, but this is not limiting; in a specific implementation, the non-target information and the target information may also be spliced and displayed according to a preset arrangement, with the non-target information and the target information shown in separate display areas.
In summary, the information processing method provided in this embodiment further includes: intercepting third image information from the first image information, the third image information including the target information; and outputting the first image information and the third image information to a second device, so that the second device displays the first image information at a first scaling ratio and the third image information at a second scaling ratio, the first scaling ratio being different from the second scaling ratio. In this scheme, part of the content is intercepted from the first image information so that the target information can be sent to the second device separately; the second device can then enlarge the target information, making it convenient for its user to view the target information in detail.
Corresponding to the embodiment of the information processing method provided by the application, the application also provides an embodiment of the electronic equipment applying the information processing method.
Fig. 13 is a schematic structural diagram of an electronic device in accordance with embodiment 1 of the present disclosure, where the electronic device includes the following structures: an image acquisition unit 1301, a processing unit 1302 and a communication unit 1303;
the image acquisition unit 1301 is configured to obtain environmental information around the first device, where the environmental information at least includes first image information;
in a specific implementation, the image acquisition unit may specifically employ a camera.
Wherein, the processing unit 1302 is configured to process the first image information; if the analysis result shows that the first image information comprises target information, determining the target information and non-target information according to the first image information;
in a specific implementation, the processing unit may adopt a functional structure with information processing capability, such as a Central Processing Unit (CPU).
The communication unit 1303 is configured to send the target information and the non-target information to a second device, where scaling ratios of the target information and the non-target information are different when the target information and the non-target information are output by the second device.
In a specific implementation, the communication unit may perform information transmission in a wired or wireless manner.
Preferably, the processing unit is configured to:
analyze whether the first image information contains target information to obtain an analysis result, wherein the target information satisfies a preset condition.
Preferably, the processing unit is specifically configured to:
compare person image features in a preset database with the first image information, and screen out an image area in the first image matching the person image features, the analysis result indicating that the first image information includes target person information.
Preferably, the electronic device further includes:
at least two audio acquisition units, configured to collect environmental audio information around the first device;
wherein the processing unit is configured to analyze the at least two sets of environmental audio information to obtain the sounding position of the audio information; to analyze, based on the setting positions of the at least two audio acquisition units, the position of the image acquisition unit that acquires the first image information, and the first image information, a second image area in the first image information corresponding to the sounding position; and to compare person image features in a preset database with the second image area corresponding to the sounding position, obtaining an image area in the first image information matching the person image features, the analysis result indicating that the first image information includes target person information.
Preferably, the processing unit is configured to:
analyze the first image information according to a preset object feature analysis rule to obtain an image area in the first image satisfying the object feature analysis rule, the analysis result indicating that the first image information includes target object information;
or
analyze the first image information according to a preset object feature analysis rule to obtain that the first image includes at least two first image areas satisfying the object feature analysis rule, the first image areas corresponding to first objects for recording information; and determine that a first object is the target object based on the image of the writing area of that first object at a first moment differing from the image of the writing area at a second moment, the analysis result indicating that the first image information includes target object information, wherein the first moment is adjacent to the second moment.
Preferably, the processing unit is specifically configured to:
compare preset object boundary/area image features with the first image information and screen out second image areas in the first image that match the object boundary/area image features;
and merge the second image areas according to their distribution in the first image information to obtain the target object information.
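For illustration only, the following Python sketch uses Canny edges as a stand-in for the preset object boundary/area image features and merges the matched second image areas into a single bounding box according to their distribution in the frame; all parameters and names are assumptions of the sketch.

import cv2


def detect_boundary_regions(frame, min_area=500):
    """Screen out second image areas whose boundaries satisfy a simple
    edge rule (a stand-in for the preset boundary features)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # OpenCV 4.x returns (contours, hierarchy).
    contours, _ = cv2.findContours(
        edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]


def merge_regions(regions):
    """Combine the matched second image areas, based on how they are
    distributed in the frame, into one target-object bounding box."""
    if not regions:
        return None
    xs = [x for x, _, _, _ in regions]
    ys = [y for _, y, _, _ in regions]
    x2s = [x + w for x, _, w, _ in regions]
    y2s = [y + h for _, y, _, h in regions]
    return (min(xs), min(ys), max(x2s) - min(xs), max(y2s) - min(ys))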
Preferably, the processing unit is further configured to:
crop third image information out of the first image information, where the third image information includes the target information;
and output the first image information and the third image information to the second device, so that the second device displays the first image information at a first scaling ratio and the third image information at a second scaling ratio, the first scaling ratio being different from the second scaling ratio.
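By way of illustration only, the following Python sketch composes the two display streams side by side: the full first image shrunk at one scaling ratio and the cropped third image magnified at another. The side-by-side layout and the canvas size are illustrative choices of this sketch; the disclosure only requires that the two scaling ratios differ.

import cv2


def compose_dual_scale_view(frame, target_region, canvas_w=1280, canvas_h=720):
    """Show the full first image at one scale and the cropped third image
    (the target) magnified at another, side by side."""
    x, y, w, h = target_region
    third_image = frame[y:y + h, x:x + w]  # cropped target information
    # First scaling ratio: shrink the whole scene into the left half.
    left = cv2.resize(frame, (canvas_w // 2, canvas_h))
    # Second scaling ratio: blow the target up to fill the right half.
    right = cv2.resize(third_image, (canvas_w // 2, canvas_h))
    return cv2.hconcat([left, right])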
In summary, the electronic device provided in this embodiment acquires an image of the environment around the first device and analyzes it to determine that the image includes target information and non-target information. When the target information and the non-target information in the first image information are output by the second device, different scaling ratios are used; in particular, the target information is magnified before being output, so that the user of the second device sees the magnified target information in the displayed content, improving the effect of the video call.
Fig. 14 is a schematic structural diagram of a second embodiment of an electronic device provided in the present application. The electronic device includes the following structures: a camera 1401, a processor 1402, and a communication interface 1403.
The camera 1401 is configured to obtain environmental information around a first device, the environmental information including at least first image information.
The processor 1402 is configured to process the first image information and, if the analysis result indicates that the first image information includes target information, to determine the target information and non-target information from the first image information.
The communication interface 1403 is configured to send the target information and the non-target information to the second device, where the target information and the non-target information are output by the second device at different scaling ratios.
Preferably, the processor is configured to:
analyze whether the first image information contains target information that satisfies a preset condition, so as to obtain an analysis result.
Preferably, the processor is configured to:
compare person image features in a preset database with the first image information and screen out an image area in the first image that matches the person image features, the analysis result indicating that the first image information includes target person information.
Preferably, the processor is configured to:
analyze the at least two sets of environmental audio information to obtain the sound source position of the audio information;
obtain, by analysis, a second image area in the first image information that corresponds to the sound source position, based on the installation positions of the at least two audio acquisition units, the position of the image acquisition unit that acquires the first image information, and the first image information;
and compare person image features in a preset database with the second image area corresponding to the sound source position, so as to obtain an image area in the first image information that matches the person image features, the analysis result indicating that the first image information includes target person information.
Preferably, the processor is configured to:
analyze the first image information according to a preset object feature analysis rule to obtain an image area in the first image that satisfies the object feature analysis rule, the analysis result indicating that the first image information includes target object information;
or
analyze the first image information according to the preset object feature analysis rule to find that the first image includes at least two first image areas satisfying the object feature analysis rule, where each first image area corresponds to a first object used for recording information; and, when the picture of the writing area of any first object at a first moment differs from the picture of that writing area at a second moment adjacent to the first moment, determine that first object to be a target object, the analysis result indicating that the first image information includes target object information.
Preferably, the processor is configured to:
compare preset object boundary/area image features with the first image information and screen out second image areas in the first image that match the object boundary/area image features;
and merge the second image areas according to their distribution in the first image information to obtain the target object information.
Preferably, the processor is further configured to:
crop third image information out of the first image information, where the third image information includes the target information;
and output the first image information and the third image information to the second device, so that the second device displays the first image information at a first scaling ratio and the third image information at a second scaling ratio, the first scaling ratio being different from the second scaling ratio.
In summary, the electronic device provided in this embodiment likewise acquires an image of the environment around the first device and analyzes it to determine that the image includes target information and non-target information. When the target information and the non-target information in the first image information are output by the second device, different scaling ratios are used; in particular, the target information is magnified before being output, so that the user of the second device sees the magnified target information in the displayed content, improving the effect of the video call.
The embodiments in this description are described in a progressive manner: each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be understood by reference to one another. Because the device provided by an embodiment corresponds to the method provided by that embodiment, the device is described relatively briefly, and the relevant details can be found in the description of the method.
The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An information processing method comprising:
obtaining environmental information around a first device, the environmental information including at least first image information;
processing the first image information;
if the analysis result shows that the first image information comprises target information, determining the target information and non-target information according to the first image information;
wherein, when the target information and the non-target information are output by a second device, their scaling ratios are different.
2. The method of claim 1, wherein processing the first image information comprises:
analyzing whether the first image information contains target information that meets a preset condition, to obtain an analysis result.
3. The method according to claim 2, wherein analyzing whether the first image information includes target information satisfying a preset condition to obtain an analysis result comprises:
comparing person image features in a preset database with the first image information, and screening out an image area in the first image that matches the person image features, the analysis result indicating that the first image information comprises target person information.
4. The method according to claim 2, wherein the environmental information further comprises at least two sets of environmental audio information collected by at least two audio acquisition units, and analyzing whether the first image information includes target information meeting a preset condition to obtain an analysis result comprises:
analyzing the at least two sets of environmental audio information to obtain a sound source position of the audio information;
obtaining, by analysis, a second image area in the first image information corresponding to the sound source position, based on the installation positions of the at least two audio acquisition units, the position of an image acquisition unit that acquires the first image information, and the first image information;
and comparing person image features in a preset database with the second image area corresponding to the sound source position, to obtain an image area in the first image information that matches the person image features, the analysis result indicating that the first image information comprises target person information.
5. The method according to claim 2, wherein analyzing whether the first image information includes target information satisfying a preset condition to obtain an analysis result comprises:
analyzing the first image information according to a preset object feature analysis rule to obtain an image area in the first image that satisfies the object feature analysis rule, the analysis result indicating that the first image information comprises target object information;
or
analyzing the first image information according to the preset object feature analysis rule to find that the first image comprises at least two first image areas satisfying the object feature analysis rule, wherein each first image area corresponds to a first object used for recording information; and, when a picture of a writing area of any first object at a first moment differs from a picture of that writing area at a second moment adjacent to the first moment, determining that first object to be a target object, the analysis result indicating that the first image information comprises target object information.
6. The method according to claim 5, wherein the analyzing the first image information according to a preset object feature analysis rule to obtain an image area satisfying the object feature analysis rule in the first image comprises:
comparing preset object boundary/area image features with the first image information, and screening out second image areas in the first image that match the object boundary/area image features;
and merging the second image areas according to their distribution in the first image information to obtain the target object information.
7. The method of claim 1, further comprising:
cropping third image information out of the first image information, wherein the third image information comprises the target information;
and outputting the first image information and the third image information to a second device, so that the second device displays the first image information at a first scaling ratio and the third image information at a second scaling ratio, wherein the first scaling ratio is different from the second scaling ratio.
8. An electronic device, comprising:
an image acquisition unit, configured to obtain environmental information around a first device, the environmental information including at least first image information;
a processing unit, configured to process the first image information and, if an analysis result indicates that the first image information comprises target information, determine the target information and non-target information from the first image information;
and a communication unit, configured to send the target information and the non-target information to a second device, wherein, when the target information and the non-target information are output by the second device, their scaling ratios are different.
9. The electronic device of claim 8, further comprising:
at least two audio acquisition units, configured to acquire environmental audio information around the first device;
wherein the processing unit is configured to: analyze the at least two sets of environmental audio information to obtain a sound source position of the audio information; obtain, by analysis, a second image area in the first image information corresponding to the sound source position, based on the installation positions of the at least two audio acquisition units, the position of an image acquisition unit that acquires the first image information, and the first image information; and compare person image features in a preset database with the second image area corresponding to the sound source position, to obtain an image area in the first image information that matches the person image features, the analysis result indicating that the first image information comprises target person information.
10. An electronic device, comprising:
a camera, configured to obtain environmental information around a first device, the environmental information including at least first image information;
a processor, configured to process the first image information and, if an analysis result indicates that the first image information comprises target information, determine the target information and non-target information from the first image information;
and a communication interface, configured to send the target information and the non-target information to a second device, wherein, when the target information and the non-target information are output by the second device, their scaling ratios are different.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911415740.0A CN111093028A (en) 2019-12-31 2019-12-31 Information processing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN111093028A 2020-05-01

Family

ID=70398371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911415740.0A Pending CN111093028A (en) 2019-12-31 2019-12-31 Information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111093028A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1774065A (en) * 2004-11-09 2006-05-17 日本电气株式会社 Videophone
CN101951493A (en) * 2010-09-25 2011-01-19 中兴通讯股份有限公司 Mobile terminal and method for partially amplifying far-end images in video call thereof
US20120262537A1 (en) * 2011-04-18 2012-10-18 Baker Mary G Methods and systems for establishing video conferences using portable electronic devices
CN105320270A (en) * 2014-07-18 2016-02-10 宏达国际电子股份有限公司 Method for performing a face tracking function and an electric device having the same
CN106161946A (en) * 2016-08-01 2016-11-23 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN107770484A (en) * 2016-08-19 2018-03-06 杭州海康威视数字技术股份有限公司 A kind of video monitoring information generation method, device and video camera
CN108933915A (en) * 2017-05-26 2018-12-04 和硕联合科技股份有限公司 Video conference device and video conference management method
CN110389737A (en) * 2019-06-27 2019-10-29 苏州佳世达电通有限公司 Display system and its display methods

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111915637A (en) * 2020-05-14 2020-11-10 五八有限公司 Picture display method and device, electronic equipment and storage medium
CN111915637B (en) * 2020-05-14 2024-03-12 五八有限公司 Picture display method and device, electronic equipment and storage medium
CN112118414A (en) * 2020-09-15 2020-12-22 深圳市健成云视科技有限公司 Video session method, electronic device, and computer storage medium

Similar Documents

Publication Publication Date Title
US10983664B2 (en) Communications interface and a communications method, a corresponding computer program, and a corresponding registration medium
US9894320B2 (en) Information processing apparatus and image processing system
US11290598B2 (en) Teleconference system and terminal apparatus
US7756675B2 (en) Group-determination-table generating device and method
US8902280B2 (en) Communicating visual representations in virtual collaboration systems
CN108322474B (en) Virtual reality system based on shared desktop, related device and method
US20160330406A1 (en) Remote communication system, method for controlling remote communication system, and storage medium
CN111093028A (en) Information processing method and electronic equipment
WO2023016107A1 (en) Remote interaction method, apparatus and system, and electronic device and storage medium
CN110673811B (en) Panoramic picture display method and device based on sound information positioning and storage medium
CN111163280B (en) Asymmetric video conference system and method thereof
US9131109B2 (en) Information processing device, display control system, and computer program product
US11756302B1 (en) Managing presentation of subject-based segmented video feed on a receiving device
CN114268813A (en) Live broadcast picture adjusting method and device and computer equipment
JP6700672B2 (en) Remote communication system, its control method, and program
CN115066907A (en) User terminal, broadcasting apparatus, broadcasting system including the same, and control method thereof
CN114531564B (en) Processing method and electronic equipment
KR102571677B1 (en) AI Studio for Online Lectures
US11949727B2 (en) Organic conversations in a virtual group setting
US20230247383A1 (en) Information processing apparatus, operating method of information processing apparatus, and non-transitory computer readable medium
US20230388447A1 (en) Subject-based smart segmentation of video feed on a transmitting device
WO2022201944A1 (en) Distribution system
CN109862419B (en) Intelligent digital laser television interaction method and system
CN114531564A (en) Processing method and electronic equipment
US20220312069A1 (en) Online video distribution support method and online video distribution support apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200501)