CN116320576A - View object processing method, device, equipment and medium - Google Patents

View object processing method, device, equipment and medium

Info

Publication number
CN116320576A
CN116320576A (application number CN202111559383.2A)
Authority
CN
China
Prior art keywords
view
view object
extended
basic
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111559383.2A
Other languages
Chinese (zh)
Inventor
欧阳铨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd filed Critical Beijing Youzhuju Network Technology Co Ltd
Priority to CN202111559383.2A priority Critical patent/CN116320576A/en
Publication of CN116320576A publication Critical patent/CN116320576A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4318Generation of visual interfaces for content selection or interaction; Content or additional data rendering by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • H04N21/4858End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The present disclosure relates to a view object processing method, apparatus, device and medium. The method includes: during display of a basic view object, if a first preset trigger event is detected, displaying, in a superimposed manner, the extended view object corresponding to the first preset trigger event; determining an information analysis result based on first view information of the basic view object and second view information of the extended view object; and hiding the basic view object if the information analysis result reaches a preset hiding condition. According to the embodiments of the present disclosure, device power consumption can be reduced and the running smoothness of the client can be improved.

Description

View object processing method, device, equipment and medium
Technical Field
The present disclosure relates to the field of information processing technologies, and in particular, to a method, an apparatus, a device, and a medium for processing a view object.
Background
In scenarios such as online teaching, online meetings, and live streaming, a visual object (referred to as a basic view object) is displayed by default; for example, in an online teaching scenario, courseware is always displayed by default.
When a user performs an operation that triggers the display of a visual object other than the basic view object (referred to as an extended view object), the view size of the extended view object is generally adjusted according to the view size of the basic view object, and the resized extended view object is displayed over the basic view object so as to cover it.
Although this processing makes the basic view object invisible to the user, it consumes excessive running resources and easily causes the client to lag; in particular, when running on a device with poor performance, it may cause the client to freeze and exit abnormally.
Disclosure of Invention
In order to solve the technical problems, the present disclosure provides a method, an apparatus, a device, and a medium for processing a view object.
In a first aspect, the present disclosure provides a view object processing method, including:
during display of the basic view object, if a first preset trigger event is detected, displaying, in a superimposed manner, the extended view object corresponding to the first preset trigger event;
determining an information analysis result based on first view information of the basic view object and second view information of the extended view object;
and hiding the basic view object if the information analysis result reaches a preset hiding condition.
In some embodiments, the determining the information analysis result based on the first view information of the base view object and the second view information of the extended view object includes:
determining a first view area of the base view object based on a view size in the first view information;
determining a second view area of the extended view object based on a view size in the second view information;
determining the view area proportion of the second view area to the first view area as the information analysis result;
correspondingly, if the information analysis result reaches a preset hiding condition, hiding the basic view object includes:
and hiding the basic view object if the view area ratio reaches a preset ratio threshold value.
In some embodiments, the hiding the base view object comprises:
stopping the rendering operation of the basic view object, and replacing the basic view object with the image rendered at the moment rendering stopped;
or stopping the rendering operation of the basic view object, and replacing the basic view object with a preset static image;
or destroying the basic view object.
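The three hiding strategies above can be sketched as follows. This is a hypothetical TypeScript illustration, not part of the patent; the `BaseView` shape, the content values, and the `placeholderUrl` default are assumptions made for the example.

```typescript
// Hypothetical sketch of the three hiding strategies described above.
// The BaseView shape and content strings are assumptions for illustration.
type HideStrategy = "snapshot" | "placeholder" | "destroy";

interface BaseView {
  rendering: boolean; // whether the rendering operation is still running
  content: string;    // what currently occupies the view slot
}

function hideBaseView(strategy: HideStrategy, placeholderUrl = "static.png"): BaseView {
  switch (strategy) {
    case "snapshot":
      // Stop rendering and keep the frame rendered at the moment of stopping.
      return { rendering: false, content: "last-rendered-frame" };
    case "placeholder":
      // Stop rendering and show a preset static image instead.
      return { rendering: false, content: placeholderUrl };
    case "destroy":
      // Destroy the view object entirely; nothing remains in the slot.
      return { rendering: false, content: "" };
  }
}
```

All three strategies stop the rendering work; they differ only in what, if anything, remains in the view slot for later restoration.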
In some embodiments, after the hiding the base view object if the information analysis result reaches a preset hiding condition, the view object processing method further includes:
and if a second preset trigger event for the extended view object is detected, redisplaying the basic view object, and processing the extended view object according to a view display mode corresponding to the second preset trigger event.
In some embodiments, the redisplaying the base view object includes:
if the rendered image or the preset static image corresponding to the basic view object is detected, restarting the rendering operation of the basic view object, and replacing the rendered image or the preset static image with the rendered basic view object;
or if the basic view object is not detected, reloading and displaying the basic view object.
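The two redisplay branches above can be expressed as a minimal sketch; the `ViewSlot` shape and the returned action names are hypothetical, chosen only to mirror the wording of the embodiment.

```typescript
// Hypothetical sketch of the redisplay logic: restore from a rendered
// snapshot or preset static image if one is present, otherwise reload.
interface ViewSlot {
  hasReplacementImage: boolean; // a rendered snapshot or preset static image
  hasView: boolean;             // the basic view object still exists
}

function redisplayBase(slot: ViewSlot): "restart-rendering" | "reload" | "already-visible" {
  if (slot.hasReplacementImage) {
    // Restart rendering and swap the live view back in place of the image.
    return "restart-rendering";
  }
  if (!slot.hasView) {
    // The view object was destroyed, so reload and display it again.
    return "reload";
  }
  return "already-visible";
}
```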
In some embodiments, if the first preset trigger event is detected, displaying the extended view object corresponding to the first preset trigger event in a superimposed manner includes:
if at least two first preset trigger events are detected, displaying the extended view objects corresponding to each first preset trigger event in a superposition mode;
accordingly, the determining the information analysis result based on the first view information of the base view object and the second view information of the extended view object includes:
determining each of the information analysis results based on the first view information and the second view information of at least one of the extended view objects;
correspondingly, if the information analysis result reaches a preset hiding condition, hiding the basic view object includes:
and if any information analysis result reaches the preset hiding condition, hiding the basic view object.
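A minimal sketch of the "any result reaches the condition" rule, assuming the area-ratio form of the analysis and a hypothetical preset threshold of 0.9 (the patent does not fix a specific value):

```typescript
// Hypothetical sketch: with several extended views overlaid, the basic view
// is hidden as soon as any single view-area ratio reaches the threshold.
function shouldHideBaseAny(
  baseArea: number,
  extendedAreas: number[],
  threshold = 0.9, // assumed preset ratio threshold
): boolean {
  return extendedAreas.some((area) => area / baseArea >= threshold);
}
```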
In some embodiments, if the first preset trigger event is detected, displaying the extended view object corresponding to the first preset trigger event in a superimposed manner includes:
if at least two first preset trigger events are detected, determining a target extended view object from the extended view objects based on the second view information of the extended view objects corresponding to the first preset trigger events;
and displaying the target extension view object in a superposition way.
In some embodiments, the first preset trigger event includes at least one of activating a high-speed camera, activating a screen sharing function, activating a game function, activating an interactive function, and activating a character video function.
In a second aspect, the present disclosure provides a view object processing apparatus, the apparatus comprising:
an extended view object display module, configured to display, in a superimposed manner, the extended view object corresponding to a first preset trigger event if the first preset trigger event is detected during display of the basic view object;
an information analysis result determining module, configured to determine an information analysis result based on the first view information of the basic view object and the second view information of the extended view object;
and a basic view object hiding module, configured to hide the basic view object if the information analysis result reaches a preset hiding condition.
In some embodiments, the information analysis result determination module is specifically configured to:
determining a first view area of the base view object based on a view size in the first view information;
determining a second view area of the extended view object based on a view size in the second view information;
determining the view area proportion of the second view area to the first view area as the information analysis result;
accordingly, the base view object hiding module is specifically configured to:
and hiding the basic view object if the view area ratio reaches a preset ratio threshold value.
In some embodiments, the base view object hiding module is further to:
stopping the rendering operation of the basic view object, and replacing the basic view object with the image rendered at the moment rendering stopped;
or stopping the rendering operation of the basic view object, and replacing the basic view object with a preset static image;
or destroying the basic view object.
In some embodiments, the view object processing apparatus further comprises a base view object display module for:
after hiding the basic view object if the information analysis result reaches a preset hiding condition, redisplaying the basic view object if a second preset trigger event for the extended view object is detected, and processing the extended view object according to a view display mode corresponding to the second preset trigger event.
In some embodiments, the base view object display module is specifically configured to:
if the rendered image or the preset static image corresponding to the basic view object is detected, restarting the rendering operation of the basic view object, and replacing the rendered image or the preset static image with the rendered basic view object;
or if the basic view object is not detected, reloading and displaying the basic view object.
In some embodiments, the extended view object display module is specifically configured to:
if at least two first preset trigger events are detected, displaying the extended view objects corresponding to each first preset trigger event in a superposition mode;
Correspondingly, the information analysis result determining module is specifically configured to:
determining each of the information analysis results based on the first view information and the second view information of at least one of the extended view objects;
accordingly, the base view object hiding module is specifically configured to:
and if any information analysis result reaches the preset hiding condition, hiding the basic view object.
In some embodiments, the extended view object display module is specifically configured to:
if at least two first preset trigger events are detected, determining a target extended view object from the extended view objects based on the second view information of the extended view objects corresponding to the first preset trigger events;
and displaying the target extension view object in a superposition way.
In some embodiments, the first preset trigger event includes at least one of activating a high-speed camera, activating a screen sharing function, activating a game function, activating an interactive function, and activating a character video function.
In a third aspect, the present disclosure provides an electronic device comprising:
a processor;
a memory for storing executable instructions;
wherein the processor is configured to read the executable instructions from the memory, and execute the executable instructions to implement the view object processing method described in any embodiment of the present disclosure.
In a fourth aspect, the present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the view object processing method described in any embodiment of the present disclosure.
According to the view object processing method, device, equipment and medium provided by the embodiments of the present disclosure, during display of the basic view object, when a first preset trigger event is detected, the extended view object corresponding to the first preset trigger event is displayed overlaid on the basic view object; an information analysis result is determined based on the first view information of the basic view object and the second view information of the extended view object; and the basic view object is hidden when the information analysis result reaches a preset hiding condition. This eliminates the process of adjusting the view size of the extended view object when the view needs to be covered, reducing resource consumption to some extent. Because the basic view object is hidden while the extended view object is displayed, the resources that would otherwise be consumed by keeping the basic view object continuously displayed in the view hierarchy are saved, further reducing the device's resource consumption, thereby improving the running smoothness of the client and the user experience.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a flow chart of a view object processing method according to an embodiment of the present disclosure;
fig. 2 is a display schematic diagram of a view object overlay display according to an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating another view object processing method according to an embodiment of the present disclosure;
FIG. 4 is a block diagram of a listener and functional modules according to an embodiment of the disclosure;
fig. 5 is a display schematic diagram of another view object overlay display provided in an embodiment of the present disclosure;
FIG. 6 is a display schematic diagram of yet another view object overlay display provided by an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a view object processing apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" or "a plurality" in this disclosure are intended to be illustrative rather than limiting; those of ordinary skill in the art will appreciate that these should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Currently, in scenarios such as online teaching, online meetings, and live streaming, there is a need to display multiple view objects in a superimposed manner. For example, in an online teaching scenario, courseware is always displayed by default (referred to as the basic view object), and the teacher may enable a high-speed camera or turn on screen sharing as needed to display additional content in a new view (referred to as an extended view object) to assist teaching. The extended view object then needs to be displayed superimposed over the basic view object. In the related art, the view size of the extended view object is generally adjusted according to the view size of the basic view object, and the resized extended view object is displayed over the basic view object to cover it. However, both the resizing process and the continued display of the basic view object beneath the extended view object consume excessive running resources, which can cause the client to lag or even crash and exit.
In view of the above, an embodiment of the present disclosure provides a view object processing scheme: when a view needs to be covered and the display relationship between the extended view object and the basic view object is judged to satisfy a preset hiding condition, the extended view object is displayed and the basic view object is hidden, thereby saving the resources consumed by resizing views and by continuously displaying the basic view object, and improving the running smoothness of the client.
The view object processing method provided by the embodiment of the disclosure can be suitable for a scene with multi-view switching display. For example, the method can be applied to the switching display of courseware and other view objects in an online teaching scene; the method can also be applied to the switching display of conference subjects/conference backgrounds and other view objects in an online conference scene; the method can also be applied to the switching display of live main content (such as a view corresponding to a host or introduced object) and other view objects (such as interactive related views) in the live process in an online live scene, and the like.
The above-described view object processing method may be performed by a view object processing apparatus, which may be implemented in software and/or hardware and integrated into an electronic device with a display function. The electronic device may include, but is not limited to, mobile terminals such as smartphones, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), notebook computers, in-vehicle terminals (e.g., in-vehicle navigation terminals), and wearable devices, as well as stationary terminals such as digital TVs, desktop computers, and smart home devices.
Fig. 1 shows a flowchart of a view object processing method according to an embodiment of the present disclosure. As shown in fig. 1, the view object processing method may include the steps of:
s110, in the display process of the basic view object, if a first preset trigger event is detected, the extended view object corresponding to the first preset trigger event is displayed in a superposition mode.
The basic view object refers to a base visual object displayed while the client is running; it may be the visual object displayed first by default during the run, a visual object displayed frequently during the run, or a visual object of high importance during the run. Taking online teaching as an example, the basic view object is the teaching courseware.
The first preset trigger event refers to a preset interaction event that is mutually exclusive with the display of the basic view object, and is used to trigger the display of view objects other than the basic view object (i.e., extended view objects). Illustratively, the first preset trigger event includes at least one of starting a high-speed camera, starting a screen sharing function, starting a game function, starting an interactive function, and starting a character video function. The high-speed camera transmits resources such as images or videos obtained by external shooting or scanning to the client. The screen sharing function presents resources shared by one user to the participating users of the client. The game function presents game screens to participating users and provides game interaction. The interactive function presents the screen of a non-game interactive activity to participating users and provides interaction. The character video function turns on a camera to present a participating user's video stream.
Specifically, the electronic device always displays the basic view object by default while the application corresponding to the client is running. For example, the electronic device always displays teaching courseware while an online teaching application is running. If, while continuously displaying the basic view object, the electronic device detects a first preset trigger event, it determines the extended view object corresponding to that event. For example, while the electronic device is displaying teaching courseware, the user triggers the screen sharing function; the electronic device detects the signal that screen sharing has been triggered, acquires the shared screen content in real time, and displays it in a new view, thereby determining the extended view object. The electronic device then displays the extended view object in a superimposed manner, i.e., it displays both the basic view object and the extended view object.
As shown in fig. 2, a basic view object 202 and an extended view object 201 are displayed superimposed in the electronic device 200. Because the extended view object 201 is displayed later, the electronic device places it in the upper-layer view, so the user first sees the extended view object 201 corresponding to the first preset trigger event the user triggered. If the view size of the extended view object 201 is smaller than that of the basic view object 202, the user can also see the unoccluded portion of the basic view object 202. The electronic device 200 may further display an "initiating user information area" and a "participating user information area" for showing related information (e.g., nicknames, avatars) of initiating users (e.g., teachers, conference moderators, streamers) and of participating users (e.g., students, conference participants, fans), respectively.
S120, determining an information analysis result based on the first view information of the basic view object and the second view information of the extended view object.
The view information refers to attribute information related to the view, and may include, for example, view size, view display priority, view type, and the like. The information analysis result is a result obtained by analyzing the view relation of each view information.
Specifically, the business requirement of displaying at least two view objects in a superimposed manner includes at least two cases: one in which the extended view object and the basic view object need to be displayed simultaneously, and another in which only one of the extended view object and the basic view object is displayed; both cases can be judged from the view information of the view objects. Therefore, after the extended view object and the basic view object are displayed in a superimposed manner, the electronic device determines which of the two cases the display relationship of the two view objects belongs to, based on the view information of the basic view object (i.e., the first view information) and the view information of the extended view object (i.e., the second view information).
In a specific implementation, the electronic device may calculate the degree to which the extended view object occludes the basic view object from the relationship between the view size in the first view information and the view size in the second view information, take this occlusion degree as the information analysis result, and then determine the display relationship of the two view objects from it, so as to decide whether to hide the basic view object. For example, when the occlusion degree is high enough, only the extended view object is displayed and the basic view object is hidden; when the occlusion degree is low, both view objects are displayed simultaneously.
The electronic device may also determine an information analysis result according to a relationship between the view display priority in the first view information and the view display priority in the second view information, so as to determine a display relationship between the two view objects. For example, if the display priorities of the two views are the same, then it is determined that the two view objects are displayed simultaneously; conversely, if the view display priority of the extended view object is higher than the view display priority of the base view object, only the extended view object is displayed and the base view object is hidden.
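The priority-based analysis can be sketched as follows; this is a hypothetical illustration assuming numeric priorities where a larger number means a higher display priority (the patent does not specify the priority encoding).

```typescript
// Hypothetical sketch of the priority-based display decision: equal
// priorities keep both views; a strictly higher extended-view priority
// hides the basic view object.
function analyzeByPriority(
  basePriority: number,
  extendedPriority: number,
): "show-both" | "hide-base" {
  return extendedPriority > basePriority ? "hide-base" : "show-both";
}
```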
In some embodiments, determining the occlusion degree of the extended view object and the base view object based on the view size may be implemented as: determining a first view area of the base view object based on the view size in the first view information; determining a second view area of the extended view object based on the view size in the second view information; and determining the view area proportion of the second view area to the first view area as an information analysis result.
Specifically, referring to fig. 2, the electronic device calculates the view area of the base view object 202 (i.e., the first view area, such as the diagonal fill area in fig. 2) from the view size (i.e., the view length and view width) in the first view information, and calculates the view area of the extended view object 201 (i.e., the second view area, such as the horizontal line fill area in fig. 2) from the view length and view width in the second view information. Then, the electronic device calculates the ratio of the second view area of the extended view object 201 displayed at the upper layer to the first view area of the base view object 202, obtaining the view area ratio as the above information analysis result. In this way, a view area ratio that is simple to calculate and quantitative in nature serves as the information analysis result, providing a more accurate data basis for the subsequent judgment of whether to hide the base view object.
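The area calculation described above can be sketched as follows; the representation of a view size as a (width, height) tuple, and the assumption that the extended view lies within the base view's bounds, are conventions of this sketch only.

```python
def view_area_ratio(base_size: tuple, ext_size: tuple) -> float:
    """Ratio of the extended view's area (second view area) to the
    base view's area (first view area).  Sizes are (width, height)."""
    base_w, base_h = base_size
    ext_w, ext_h = ext_size
    return (ext_w * ext_h) / (base_w * base_h)
```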
S130, if the information analysis result reaches a preset hiding condition, hiding the basic view object.
The preset hiding condition is a preset critical condition for judging whether to hide the base view object. For example, when the information analysis result is the occlusion degree, the preset hiding condition may be a critical value of the occlusion degree; for another example, when the information analysis result is a priority relationship, the preset hiding condition may be whether the two priorities are the same.
Specifically, the electronic device compares the obtained information analysis result with the preset hiding condition. If the information analysis result does not reach the preset hiding condition, the electronic device may display the base view object and the extended view object simultaneously according to the client's original view display logic. If the information analysis result reaches the preset hiding condition, the electronic device hides the base view object; for example, it modifies the display attribute of the base view object, changing its attribute value from displayed to non-displayed. In this way, the electronic device displays only the extended view object, saving the running resources that the base view object would otherwise consume by continuing to be drawn at a view level invisible to the user.
In some embodiments, when the information analysis result is determined based on the view size, S130 may be implemented as: and hiding the basic view object if the view area ratio reaches a preset ratio threshold value.
The preset ratio threshold is a preset critical value of the view area ratio at which the base view object is hidden; for example, the preset ratio threshold may be set to 95%.
Specifically, the electronic device compares the calculated view area ratio with the preset ratio threshold. If the view area ratio is less than the preset ratio threshold, the base view object and the extended view object are displayed simultaneously. If the view area ratio is greater than or equal to the preset ratio threshold, only the extended view object is displayed and the base view object is hidden. In this way, whether to hide the base view object can be judged more accurately on the basis of the quantified view area ratio and the preset ratio threshold, and the display relationship of the two view objects can be tuned by adjusting that threshold, so that the hiding of the base view object, and thus the running-resource consumption of view display, can be controlled more accurately and flexibly, providing an implementation basis for running the client smoothly on devices of different performance.
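The threshold comparison of S130 might look like the following sketch, using the 95% example value given above; the constant and function names are illustrative.

```python
HIDE_RATIO_THRESHOLD = 0.95  # the 95% example threshold from the text

def should_hide_base(area_ratio: float,
                     threshold: float = HIDE_RATIO_THRESHOLD) -> bool:
    """Hide the base view object once the extended view's area ratio
    reaches the preset ratio threshold; otherwise show both views."""
    return area_ratio >= threshold
```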
According to the view object processing method provided by this embodiment, during display of the base view object, when a first preset trigger event is detected, the extended view object corresponding to that event is displayed superimposed on the base view object; an information analysis result is determined from the first view information of the base view object and the second view information of the extended view object; and the base view object is hidden when the information analysis result reaches the preset hiding condition. This omits the process of adjusting the view size of the extended view object when a view needs to be hidden, reducing resource consumption to some extent. Hiding the base view object while the extended view object is displayed saves the resources that the base view object would consume by remaining in the view hierarchy, further reducing device resource consumption, improving the running smoothness of the client, and improving the user experience.
Fig. 3 is a flowchart of another view object processing method according to an embodiment of the present disclosure. This view object processing method builds on the foregoing embodiments to process a base view object together with a plurality of extended view objects.
In the related art, a monitor is set up independently for the functional module corresponding to each first preset trigger event, and the electronic device detects each first preset trigger event through its own monitor and executes the hiding judgment logic for the base view object independently. For example, the screen sharing function has a screen sharing module, the game function has a game module, the interactive function has an interactive module, and the character video function has a character video module. When the monitor corresponding to the screen sharing module detects a first preset trigger event, that monitor independently executes the view object processing method described in the above embodiments; when the monitor corresponding to the game module detects a first preset trigger event, that monitor likewise executes the method independently. This causes the following problems: on the one hand, because each functional module independently handles the display logic of the extended view object and the hiding logic of the base view object, the client program contains a large amount of duplicated code; on the other hand, the judgment logic of the functional modules is isolated, no comprehensive analysis can be performed, and conflicting judgments about whether to hide the base view object arise easily. For example, exiting the screen sharing function may produce the judgment "display the base view object" while starting the interactive function produces the judgment "hide the base view object", and the two results conflict.
In view of the above, in the embodiment of the present disclosure, a unified monitor may be set up to monitor the first preset trigger events of the different functional modules and execute the judgment logic of whether to hide the base view object in one place. As shown in fig. 4, the high-speed camera module 401, the screen sharing module 402, the game module 403, the interaction module 404 and the character video module 405 are all communicatively connected to the monitor 400, and the monitor 400 uniformly monitors the first preset trigger events generated by these functional modules and executes the judgment logic of whether to hide the base view object. This avoids large amounts of duplicated code in the client program, allows the state information of each functional module to be aggregated into a single judgment, and improves the success rate of processing the base view object.
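A minimal sketch of such a unified monitor is shown below. The module names follow fig. 4, while the class shape, the event strings and the callback signature are assumptions of this sketch, not the client's real API.

```python
class UnifiedMonitor:
    """Single monitor that aggregates trigger events from all
    functional modules and runs the hide/show judgment in one place."""

    def __init__(self, on_change):
        self.on_change = on_change  # shared judgment logic, run per event
        self.active = set()         # modules whose trigger event is active

    def notify(self, module: str, event: str):
        # "start" stands for a first preset trigger event,
        # "stop" for the matching second preset trigger event
        if event == "start":
            self.active.add(module)
        elif event == "stop":
            self.active.discard(module)
        self.on_change(frozenset(self.active))

# modules from fig. 4 report to the same monitor instead of acting alone
events = []
monitor = UnifiedMonitor(events.append)
monitor.notify("screen_sharing", "start")
monitor.notify("game", "start")
```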
Referring to fig. 3, the view object processing method specifically includes:
S310, in the display process of the base view object, if at least two first preset trigger events are detected, the extended view object corresponding to each first preset trigger event is displayed in a superimposed manner.
Specifically, when the monitor in the electronic device detects the first preset trigger event corresponding to at least one functional module, the extended view object corresponding to each first preset trigger event may be displayed superimposed above the base view object. The stacking order of the extended view objects may be determined at random, or according to the view display priority or importance of each extended view object. For example, the extended view object with the highest view display priority or importance is displayed at the uppermost layer, closest to the screen.
S320, determining each information analysis result based on the first view information and the second view information of at least one extended view object.
Specifically, in the case of displaying a plurality of extended view objects superimposed on the base view object, the electronic device may calculate and obtain a corresponding information analysis result from the first view information of the base view object and the second view information of each extended view object.
In some embodiments, the electronic device can determine whether to hide the base view object based on a degree of occlusion between the base view object and each of the extended view objects. That is, the electronic device determines an information analysis result corresponding to each extended view object based on the first view information and the second view information of each extended view object.
Specifically, for each extended view object, the electronic device may determine an information analysis result between the first view information and the second view information of the extended view object according to the process description of S120.
In other embodiments, the electronic device may determine whether to hide the base view object based on a degree of occlusion between the base view object and the plurality of extended view objects. That is, the electronic device determines an information analysis result corresponding to at least two extended view objects based on the first view information and the second view information of the at least two extended view objects.
Specifically, when the occlusion degree of any single extended view object over the base view object does not meet the preset hiding condition, so that the base view object would not be hidden, this example further judges whether the plurality of extended view objects together occlude the base view object enough to trigger hiding; that is, the combined occlusion degree of the plurality of extended view objects over the base view object is calculated, and it is then judged whether that occlusion degree meets the preset hiding condition.
For example, the occlusion degree of the plurality of extended view objects over the base view object may be calculated by determining the rectangular union of the plurality of extended view objects and computing the proportion that the area of this union occupies in the first view area of the base view object, yielding the view area ratio corresponding to the plurality of extended view objects as their information analysis result.
Referring to fig. 5, a base view object 502 and 3 extended view objects 501 are displayed superimposed in an electronic device 500. The electronic device 500 may determine a rectangular union 503 (illustrated by the range outlined in bold lines in fig. 5) according to the coordinates of the 3 extended view objects 501, and calculate the area of the rectangular union 503 according to the lengths of the sides of the rectangular union 503. Then, the electronic device 500 calculates the ratio of the area to the first view area, so as to obtain the view area ratio corresponding to the 3 extended view objects.
It should be understood that the electronic device may also calculate, according to the above procedure, the view area ratio corresponding to any two extended view objects 501 as the information analysis result of the plurality of extended view objects.
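The union-based calculation above can be sketched as follows. It uses coordinate compression so that overlapping extended views are not double-counted; the (x1, y1, x2, y2) rectangle representation is an assumed convention, and the quadratic cell grid is acceptable here because only a handful of extended views are involved.

```python
def union_area(rects):
    """Area of the union of axis-aligned rectangles (x1, y1, x2, y2)."""
    xs = sorted({x for r in rects for x in (r[0], r[2])})
    ys = sorted({y for r in rects for y in (r[1], r[3])})
    area = 0
    for i in range(len(xs) - 1):
        for j in range(len(ys) - 1):
            cx, cy = xs[i], ys[j]   # lower-left corner of the grid cell
            if any(r[0] <= cx < r[2] and r[1] <= cy < r[3] for r in rects):
                area += (xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j])
    return area

def union_area_ratio(ext_rects, first_view_area):
    """View area ratio of several extended views over the base view."""
    return union_area(ext_rects) / first_view_area
```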
Alternatively, the occlusion degree of the plurality of extended view objects over the base view object may be calculated by determining, from the second view information of the plurality of extended view objects, the area of the maximum bounding rectangle of the at least two extended view objects, and computing the ratio of that area to the first view area of the base view object, yielding the view area ratio corresponding to the plurality of extended view objects as their information analysis result.
Referring to fig. 6, a base view object 602 and 3 extended view objects 601 are displayed superimposed in an electronic device 600. The electronic device 600 may determine the corresponding maximum bounding rectangle 603 (illustrated by a thick solid line in fig. 6) from the coordinates of the 3 extended view objects 601, and calculate its area from its side lengths. Then, the electronic device 600 calculates the ratio of the area of the maximum bounding rectangle 603 to the first view area, obtaining the view area ratio corresponding to the 3 extended view objects.
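The bounding-rectangle variant admits an even shorter sketch; the (x1, y1, x2, y2) rectangle representation is again an assumed convention of this illustration.

```python
def bounding_rect_ratio(ext_rects, first_view_area):
    """Ratio of the smallest rectangle enclosing all extended views
    (the maximum bounding rectangle in the text) to the base area."""
    x1 = min(r[0] for r in ext_rects)
    y1 = min(r[1] for r in ext_rects)
    x2 = max(r[2] for r in ext_rects)
    y2 = max(r[3] for r in ext_rects)
    return ((x2 - x1) * (y2 - y1)) / first_view_area
```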
S330, if any information analysis result reaches a preset hiding condition, hiding the basic view object.
Specifically, the electronic device compares each of the information analysis results obtained above with a preset hiding condition. And if at least one comparison result is that the information analysis result reaches the preset hiding condition, executing the operation of hiding the basic view object.
For example, the electronic device may compare the information analysis result corresponding to each extended view object with the preset hiding condition to determine whether any of them reaches the preset hiding condition. If so, the operation of hiding the base view object is performed. If not, the electronic device further compares the information analysis result corresponding to the plurality of extended view objects taken together with the preset hiding condition; if that result meets the preset hiding condition, the operation of hiding the base view object is performed, and if none of the information analysis results meets the preset hiding condition, the base view object is not hidden.
It should be noted that, when multiple extended view objects are displayed, the operations of S120 to S130 in the above embodiments may also be applied to judge whether each displayed extended view object should itself be hidden, so that in the end only one extended view object is displayed, reducing the consumption of running resources and improving the smoothness of the client.
According to the view object processing method provided by the embodiment of the present disclosure, during display of the base view object, when at least two first preset trigger events are detected, the extended view object corresponding to each first preset trigger event can be displayed in a superimposed manner; each information analysis result is determined from the first view information and the second view information of at least one extended view object; and the base view object is hidden if any information analysis result reaches the preset hiding condition. This realizes the handling of concurrently displayed extended view objects, completes the view object processing flow, avoids large amounts of duplicated code and conflicting processing logic for the base view object in the client program, improves the success rate of processing the base view object, and further reduces device resource consumption.
In another embodiment provided by the present disclosure, S110 may be implemented as: if at least two first preset trigger events are detected, determining a target extended view object from the extended view objects based on second view information of the extended view objects corresponding to the first preset trigger events; and superposing and displaying the target extension view object.
Specifically, when the electronic device detects a plurality of first preset trigger events, it may determine the target extended view object as the extended view object with the highest view display priority, the one with the largest view area, or the one with the highest degree of user interaction. The electronic device then displays the target extended view object superimposed on the base view object, so that one extended view object and one base view object are still displayed. The subsequent process of judging whether to hide the base view object then follows S120 to S130. This arrangement avoids both the cluttered display and the excessive consumption of running resources caused by displaying multiple extended view objects simultaneously, remains compatible with concurrent first preset trigger events, reduces resource consumption, and improves the running smoothness of the client.
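The selection of a target extended view object might be sketched as below, combining two of the criteria mentioned above: highest view display priority first, with the larger view area as a tie-breaker. The dict field names and the ranking rule are assumptions of this sketch.

```python
def pick_target_view(views):
    """Pick one extended view to display when several first preset
    trigger events fire at once.  Each view is a dict with 'priority',
    'w' and 'h'; higher priority wins, larger area breaks ties."""
    return max(views, key=lambda v: (v["priority"], v["w"] * v["h"]))
```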
In some embodiments, the step of hiding the base view object may additionally perform deeper processing of the base view object, so as to further reduce power consumption.
In an example, hiding the base view object includes: and stopping the rendering operation of the base view object, and replacing the base view object by the rendered image when the rendering is stopped.
Specifically, merely hiding the base view object only removes its display; the base view object still performs rendering operations in the background. Therefore, this example further stops the rendering operation of the base view object. At that moment a still image is produced, namely the rendered image at the time rendering was stopped. The electronic device replaces the base view object with this rendered image as the content of the corresponding view. In this way, while the content of the base view object is preserved, the memory occupied by the corresponding view is reduced and device power consumption decreases.
In another example, hiding the base view object includes: and stopping the rendering operation of the basic view object, and replacing the basic view object by using the preset static image.
The preset static image is an image preset for the base view object; for example, it may be the first rendered frame of the base view object, or a rendered frame that reflects the key content of the base view object.
Specifically, in this example, the rendering operation on the base view object is also stopped, but the content in the view corresponding to the base view object is replaced with the preset still image. By the arrangement, memory occupation of corresponding views can be reduced and equipment power consumption can be reduced under the condition that the object content of the basic view is reserved.
In yet another example, hiding the base view object includes: destroying the base view object.
Specifically, to further reduce memory occupation, this example directly destroys the base view object after hiding it. As a result, no rendering operation on the base view object remains, and device power consumption is reduced.
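The three hiding strategies described above (stop rendering and keep the last rendered frame, stop rendering and show a preset still, or destroy the view outright) can be sketched on a toy view object; all class, field and file names here are illustrative assumptions, not the client's real view API.

```python
class ToyView:
    """Illustrative stand-in for a client view object."""
    def __init__(self):
        self.visible = True
        self.rendering = True
        self.content = "live"
        self.alive = True      # False once the view is destroyed

def hide_with_snapshot(view):
    """Strategy 1: stop rendering, keep the frame rendered at stop time."""
    view.visible = False
    view.rendering = False
    view.content = "frame_at_stop.png"

def hide_with_preset_still(view, still="first_frame.png"):
    """Strategy 2: stop rendering, show a preset static image instead."""
    view.visible = False
    view.rendering = False
    view.content = still

def hide_and_destroy(view):
    """Strategy 3: destroy the view outright to free its memory."""
    view.visible = False
    view.rendering = False
    view.alive = False
```

The strategies trade restore cost against savings: the snapshot variants keep the view in memory for fast redisplay, while destroying frees the most resources but requires a reload later.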
Testing an online teaching application running on the same device according to the above embodiments yields the following results: hiding the teaching courseware and stopping its rendering reduces the device's CPU usage by about 10% and its memory occupation by about 30 MB; on top of hiding, destroying the teaching courseware reduces CPU usage by about 30% and memory occupation by about 200 MB.
In some embodiments, after hiding the base view object, the view object processing method further comprises: and if the second preset trigger event corresponding to the first preset trigger event is detected, redisplaying the basic view object, and processing the extended view object according to the view display mode corresponding to the second preset trigger event.
The second preset trigger event is a preset event, executed on the extended view object, for triggering redisplay of the base view object. The second preset trigger event may be an event corresponding to, or matching, the first preset trigger event. For example, where the first preset trigger events are starting the high-speed camera, starting the screen sharing function, starting the game function, starting the interactive function and starting the character video function, the second preset trigger events may respectively be closing the high-speed camera, closing the screen sharing function, closing the game function, closing the interactive function and closing the character video function. It should be noted that, besides an event that closes the corresponding extended view object (i.e., a closing event), the second preset trigger event may also be an event that shrinks the corresponding extended view object (i.e., a zoom-out event), an event that lowers the display priority of the corresponding extended view object (i.e., a priority-lowering event), or the like.
Specifically, the electronic device performs at least an operation of redisplaying the base view object when detecting a second preset trigger event for a certain extended view object. For example, the electronic device re-modifies the display attribute of the base view object from a non-displayed attribute value to a displayed attribute value. And then, the electronic equipment determines the view display mode of the corresponding extended view object according to the related information of the second preset trigger event, and processes the extended view object according to the view display mode.
For example, if the second preset trigger event is a closing event, the electronic device may determine that the view display manner is a closing view object, and then the electronic device closes an extended view object corresponding to the second preset trigger event, i.e. hides the extended view object.
For another example, if the second preset trigger event is a zoom-out event, the electronic device may determine that the view display mode is to shrink the view object, and it shrinks the extended view object corresponding to the second preset trigger event according to the zoom-out information of the event (the reduced view size, the zoom-out scale, etc.). Then, when the electronic device judges that the view size of the reduced extended view object is smaller than or equal to the view size of the base view object, or that the ratio of the reduced extended view object's view area to the first view area is smaller than or equal to another preset ratio threshold, it hides the extended view object corresponding to the second preset trigger event. This further preset ratio threshold is smaller than the preset ratio threshold described earlier. Otherwise, if the electronic device judges that the view size of the reduced extended view object is greater than the view size of the base view object, or that the ratio of its view area to the first view area is greater than that threshold, it continues to display the extended view object.
For another example, if the second preset trigger event is a priority-lowering event, the electronic device may determine that the view display mode is to lower the view order, and it adjusts the display order of the extended view object corresponding to the second preset trigger event to below the base view object. Then, when the electronic device judges that the view size of the extended view object is smaller than or equal to the view size of the base view object, or that the ratio of the extended view object's view area to the first view area is smaller than or equal to another preset ratio threshold, it hides the extended view object. Otherwise, if the view size of the extended view object is greater than the view size of the base view object, or the ratio of its view area to the first view area is greater than that threshold, the electronic device continues to display the extended view object.
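The handling of the three kinds of second preset trigger events might be sketched as follows. The event names, the tuple return convention and the 0.5 value for the other preset ratio threshold are assumptions of this sketch; the base view is redisplayed in every branch, as the text describes.

```python
OTHER_RATIO_THRESHOLD = 0.5  # assumed; must be below the hide threshold

def handle_second_trigger(event: str, ext_ratio: float):
    """Decide (base action, extended action) for a second preset
    trigger event; ext_ratio is the extended view's area ratio after
    the event (e.g. after shrinking) relative to the first view area."""
    if event == "close":
        return ("show_base", "hide_extended")
    if event in ("zoom_out", "lower_priority"):
        if ext_ratio <= OTHER_RATIO_THRESHOLD:
            return ("show_base", "hide_extended")
        return ("show_base", "show_extended")
    return ("show_base", "show_extended")   # unknown event: keep both
```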
It should be noted that, when the second preset trigger event is a closing event, the extended view object depends on input from client-external resources introduced by the corresponding first preset trigger event, and triggering the second preset trigger event stops that input; therefore, when the electronic device detects the second preset trigger event and hides the extended view object, the extended view object is destroyed automatically, and there is no problem of excessive memory occupation or excessive consumption of device running resources. If, however, the extended view object remains in memory after being hidden and continues to perform rendering operations, the operations of stopping rendering and even destroying it, as described in the above embodiments, may likewise be applied to the extended view object to reduce excessive memory occupation and excessive consumption of device running resources.
In some embodiments, when the electronic device has hidden the base view object and has stopped rendering it or destroyed it, redisplaying the base view object involves, in addition to modifying the attribute value of the display attribute, resuming the rendering operation or reloading the base view object.
In an example, if a rendered image or a preset still image corresponding to the base view object is detected, the rendering operation of the base view object is restarted, and the rendered base view object is used to replace the rendered image or the preset still image.
Specifically, the electronic device detects whether the base view object exists or not in the case that the base view object needs to be redisplayed. If so, it is further detected whether the content in the view corresponding to the base view object is a rendered image or a preset still image. If so, then the content in the view needs to be replaced with the base view object. Rendering of the base view object is then resumed. Thus, after modifying the display properties of the base view object, the user may see the rendered base view object.
In another example, if the base view object is not detected, the base view object is reloaded and displayed.
Specifically, in the case where the electronic device needs to redisplay the base view object, if it detects that the base view object does not exist in the memory, the electronic device reloads the base view object. After the base view object is obtained, rendering operations begin to be performed thereon. Thus, after modifying the display properties of the base view object, the user may see the rendered base view object.
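The two redisplay paths described above (restart rendering over a snapshot or preset still, or reload a destroyed view) can be sketched together; the dataclass fields, the "live" marker and the `loader` callback are illustrative assumptions, not the client's real API.

```python
from dataclasses import dataclass

@dataclass
class View:
    visible: bool = False
    rendering: bool = False
    content: str = "snapshot.png"   # rendered image / preset still

def redisplay_base(view, loader):
    """Redisplay the base view object: reload it via `loader` if it was
    destroyed, otherwise swap live content back in and resume rendering."""
    if view is None:
        view = loader()              # base view not detected: reload it
    elif view.content != "live":
        view.content = "live"        # replace snapshot / static image
        view.rendering = True        # restart the rendering operation
    view.visible = True              # finally flip the display attribute
    return view
```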
Fig. 7 shows a schematic structural diagram of a view object processing apparatus according to an embodiment of the present disclosure. As shown in fig. 7, the view object processing apparatus 700 may include:
the extended view object display module 710 is configured to, in a display process of the base view object, superimpose and display an extended view object corresponding to the first preset trigger event if the first preset trigger event is detected;
an information analysis result determining module 720, configured to determine an information analysis result based on the first view information of the base view object and the second view information of the extended view object;
the base view object hiding module 730 is configured to hide the base view object if the information analysis result reaches a preset hiding condition.
According to the view object processing apparatus provided by this embodiment, during display of the base view object, when a first preset trigger event is detected, the extended view object corresponding to that event is displayed superimposed on the base view object; an information analysis result is determined from the first view information of the base view object and the second view information of the extended view object; and the base view object is hidden when the information analysis result reaches the preset hiding condition. This omits the process of adjusting the view size of the extended view object when a view needs to be hidden, reducing resource consumption to some extent. Hiding the base view object while the extended view object is displayed saves the resources that the base view object would consume by remaining in the view hierarchy, further reducing device resource consumption, improving the running smoothness of the client, and improving the user experience.
In some embodiments, the information analysis result determination module 720 is specifically configured to:
determining a first view area of the base view object based on the view size in the first view information;
determining a second view area of the extended view object based on the view size in the second view information;
determining the view area proportion of the second view area to the first view area as an information analysis result;
accordingly, the base view object hiding module 730 is specifically configured to:
and hiding the basic view object if the view area ratio reaches a preset ratio threshold value.
In some embodiments, the base view object hiding module 730 is further to:
stopping the rendering operation of the base view object, and replacing the base view object by the rendered image when the rendering is stopped;
or stopping the rendering operation of the basic view object, and replacing the basic view object by using a preset static image;
alternatively, the base view object is destroyed.
In some embodiments, the view object processing apparatus 700 further comprises a base view object display module for:
after hiding the basic view object if the information analysis result reaches the preset hiding condition, redisplaying the basic view object if a second preset trigger event for the extended view object is detected, and processing the extended view object according to a view display mode corresponding to the second preset trigger event.
In some embodiments, the base view object display module is specifically configured to:
if a rendered image or preset static image corresponding to the base view object is detected, restarting the rendering operation of the base view object and replacing the rendered image or preset static image with the rendered base view object;
or, if the base view object is not detected (for example, because it was destroyed), reloading and displaying the base view object.
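Restoring the base view thus depends on how it was hidden. A minimal sketch, again using a plain dict as a stand-in for a real view object (`reload_view` is a hypothetical helper invented for the example):

```python
def reload_view() -> dict:
    """Hypothetical reload: build a freshly rendered base view from scratch."""
    return {"rendering": True, "replacement": None, "destroyed": False}

def redisplay_base_view(view: dict) -> dict:
    """Restore the base view after a second preset trigger event.

    If a replacement image (rendered snapshot or static placeholder) is
    present, resume rendering and swap the replacement out; if the view
    was destroyed, reload it from scratch.
    """
    if view.get("destroyed"):
        return reload_view()
    if view.get("replacement") is not None:
        return {"rendering": True, "replacement": None, "destroyed": False}
    return view  # already displayed; nothing to do
```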
In some embodiments, the extended view object display module 710 is specifically configured to:
if at least two first preset trigger events are detected, displaying the extended view object corresponding to each first preset trigger event in a superimposed manner;
accordingly, the information analysis result determining module 720 is specifically configured to:
determining an information analysis result for each extended view object based on the first view information and the second view information of that extended view object;
accordingly, the base view object hiding module 730 is specifically configured to:
and if any information analysis result reaches a preset hiding condition, hiding the basic view object.
In some embodiments, the extended view object display module 710 is specifically configured to:
if at least two first preset trigger events are detected, determining a target extended view object from among the extended view objects based on the second view information of the extended view objects corresponding to the first preset trigger events;
and displaying the target extended view object in a superimposed manner.
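The text leaves the selection criterion to the second view information; one plausible reading, picking the overlay with the largest view area, can be sketched as follows (the largest-area rule is an assumption, not stated in the patent):

```python
def choose_target_extended_view(extended_sizes: list) -> int:
    """Return the index of the extended view to superimpose.

    `extended_sizes` holds (width, height) pairs from the second view
    information; here the largest-area view wins, which keeps a single
    overlay on screen when several trigger events fire at once.
    """
    areas = [w * h for w, h in extended_sizes]
    return max(range(len(areas)), key=areas.__getitem__)

print(choose_target_extended_view([(300, 200), (1080, 1920), (640, 480)]))  # → 1
```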
In some embodiments, the first preset trigger event includes at least one of activating a high-speed camera, activating a screen sharing function, activating a game function, activating an interactive function, and activating a character video function.
It should be noted that the view object processing apparatus 700 shown in fig. 7 may perform the steps of the method embodiments shown in figs. 1 to 6 and achieve the corresponding processes and effects, which are not repeated here.
The disclosed embodiments also provide an electronic device that may include a processor and a memory that may be used to store executable instructions. Wherein the processor may be configured to read executable instructions from the memory and execute the executable instructions to implement the view object processing method in any of the embodiments described above.
Fig. 8 shows a schematic structural diagram of an electronic device according to an embodiment of the disclosure. As shown in fig. 8, the electronic device 800 may include a processing means (e.g., a central processor, a graphics processor, etc.) 801 that may perform various appropriate actions and processes according to programs stored in a read-only memory (ROM) 802 or programs loaded from a storage means 808 into a random access memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the electronic device 800 are also stored. The processing means 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
In general, the following means may be connected to the I/O interface 805: input means 806 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; output means 807 including, for example, a liquid crystal display (LCD), speakers, vibrators, and the like; storage means 808 including, for example, magnetic tape, hard disk, and the like; and communication means 809, which may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data.
It should be noted that the electronic device 800 shown in fig. 8 is only an example, and should not impose any limitation on the functions and application scope of the embodiments of the present disclosure. While fig. 8 shows an electronic device 800 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
The embodiments of the present disclosure also provide a computer-readable storage medium storing a computer program, which when executed by a processor, causes the processor to implement the view object processing method in any of the embodiments of the present disclosure.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communication device 809, or installed from storage device 808, or installed from ROM 802. When executed by the processing means 801, the computer program performs the above-described functions defined in the view object processing method of any embodiment of the present disclosure.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some embodiments, clients and servers may communicate using any currently known or future developed network protocol, such as HTTP, and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the steps of the view object processing method described in any of the embodiments of the present disclosure.
In embodiments of the present disclosure, computer program code for performing the operations of the present disclosure may be written in one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of devices, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only of preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, embodiments formed by substituting the above features with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (11)

1. A method for view object processing, comprising:
in the display process of a basic view object, if a first preset trigger event is detected, displaying an extended view object corresponding to the first preset trigger event in a superimposed manner;
determining an information analysis result based on the first view information of the base view object and the second view information of the extended view object;
and if the information analysis result reaches a preset hiding condition, hiding the basic view object.
2. The method of claim 1, wherein the determining information analysis results based on the first view information of the base view object and the second view information of the extended view object comprises:
determining a first view area of the base view object based on a view size in the first view information;
determining a second view area of the extended view object based on a view size in the second view information;
determining the view area proportion of the second view area to the first view area as the information analysis result;
and if the information analysis result reaches a preset hiding condition, hiding the basic view object comprises the following steps:
and hiding the basic view object if the view area ratio reaches a preset ratio threshold.
3. The method of claim 1, wherein hiding the base view object comprises:
stopping the rendering operation of the basic view object, and replacing the basic view object with the image rendered when rendering stopped;
or stopping the rendering operation of the basic view object, and replacing the basic view object with a preset static image;
or destroying the basic view object.
4. The method according to claim 1, wherein after hiding the base view object if the information analysis result reaches a preset hiding condition, the method further comprises:
and if a second preset trigger event for the extended view object is detected, redisplaying the basic view object, and processing the extended view object according to a view display mode corresponding to the second preset trigger event.
5. The method of claim 4, wherein the redisplaying the base view object comprises:
if the rendered image or the preset static image corresponding to the basic view object is detected, restarting the rendering operation of the basic view object, and replacing the rendered image or the preset static image with the rendered basic view object;
or, if the basic view object is not detected, reloading and displaying the basic view object.
6. The method according to claim 1, wherein if a first preset trigger event is detected, displaying the extended view object corresponding to the first preset trigger event in a superimposed manner includes:
if at least two first preset trigger events are detected, displaying the extended view object corresponding to each first preset trigger event in a superimposed manner;
the determining an information analysis result based on the first view information of the basic view object and the second view information of the extended view object includes:
determining an information analysis result for each of the extended view objects based on the first view information and the second view information of that extended view object;
and if the information analysis result reaches a preset hiding condition, hiding the basic view object comprises the following steps:
and if any information analysis result reaches the preset hiding condition, hiding the basic view object.
7. The method according to claim 1, wherein if a first preset trigger event is detected, displaying the extended view object corresponding to the first preset trigger event in a superimposed manner includes:
if at least two first preset trigger events are detected, determining a target extended view object from among the extended view objects based on the second view information of the extended view objects corresponding to the first preset trigger events;
and displaying the target extended view object in a superimposed manner.
8. The method of claim 1, wherein the first preset trigger event comprises at least one of activating a high-speed camera, activating a screen sharing function, activating a game function, activating an interactive function, and activating a character video function.
9. A view object processing apparatus, comprising:
an extended view object display module, configured to, in the display process of a basic view object, display an extended view object corresponding to a first preset trigger event in a superimposed manner if the first preset trigger event is detected;
an information analysis result determining module, configured to determine an information analysis result based on the first view information of the base view object and the second view information of the extended view object;
and the basic view object hiding module is used for hiding the basic view object if the information analysis result reaches a preset hiding condition.
10. An electronic device, comprising:
a processor;
a memory for storing executable instructions;
wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the view object processing method of any one of claims 1-8.
11. A computer readable storage medium, characterized in that the storage medium stores a computer program, which when executed by a processor causes the processor to implement the view object processing method of any of the preceding claims 1-8.
CN202111559383.2A 2021-12-20 2021-12-20 View object processing method, device, equipment and medium Pending CN116320576A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111559383.2A CN116320576A (en) 2021-12-20 2021-12-20 View object processing method, device, equipment and medium


Publications (1)

Publication Number Publication Date
CN116320576A true CN116320576A (en) 2023-06-23

Family

ID=86820848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111559383.2A Pending CN116320576A (en) 2021-12-20 2021-12-20 View object processing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN116320576A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination