CN112069556A - Screen control method, device and system - Google Patents

Screen control method, device and system

Info

Publication number
CN112069556A
Authority
CN
China
Prior art keywords
image
target
screen
display
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010917306.9A
Other languages
Chinese (zh)
Inventor
苏宁博
卢涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Wanxiang Electronics Technology Co Ltd
Original Assignee
Xian Wanxiang Electronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Wanxiang Electronics Technology Co Ltd filed Critical Xian Wanxiang Electronics Technology Co Ltd
Priority to CN202010917306.9A
Publication of CN112069556A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/82 Protecting input, output or interconnection devices
    • G06F21/84 Protecting input, output or interconnection devices; output devices, e.g. displays or monitors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1446 Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display; display composed of modules, e.g. video walls
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces

Abstract

The invention discloses a screen control method, device and system. The method comprises the following steps: when a target event is detected within a preset range of a large screen, acquiring position information of a target area in a first image displayed on the large screen; determining a target screen corresponding to the target area based on the position information of the target area, wherein the large screen is formed by splicing a plurality of screens; and controlling the target screen to display a preset image. The invention solves the technical problem of low security of large screens in the related art.

Description

Screen control method, device and system
Technical Field
The invention relates to the field of large screens, in particular to a screen control method, device and system.
Background
In large-screen scenarios, a large screen is formed by splicing a plurality of screens together. Because the whole large screen can be viewed from a wide range, peeping and candid shooting are easy to occur, the information displayed on the large screen can be leaked, and the use security of the large screen is compromised.
No effective solution to the above problem has yet been proposed.
Disclosure of Invention
The embodiments of the invention provide a screen control method, device and system, so as to at least solve the technical problem of low large-screen security in the related art.
According to an aspect of an embodiment of the present invention, there is provided a screen control method including: when a target event is detected within a preset range of a large screen, acquiring position information of a target area in a first image displayed on the large screen; determining a target screen corresponding to the target area based on the position information of the target area, wherein the large screen is formed by splicing a plurality of screens; and controlling the target screen to display a preset image.
Optionally, the target region comprises at least one of: a first display area for displaying a target application, a second display area for displaying a target element, wherein the target element comprises one of: text, pictures, and video.
Optionally, the acquiring the position information of the first display area includes: detecting whether a target application is opened or not in a process of acquiring a display image of an image source device, wherein a first image is determined based on image data of the acquired display image; and if the target application is detected to be opened, acquiring window position information of the target application to obtain position information of the first display area.
Optionally, the acquiring the position information of the second display area includes: acquiring a display image of image source equipment to obtain image data; identifying image data, and determining whether the image data contains a target element; and determining the position information of the target element in the image data to obtain the position information of the second display area under the condition that the image data contains the target element.
Optionally, determining, based on the position information of the target area, a target screen corresponding to the target area includes: dividing the first image based on a preset image segmentation mode to obtain a plurality of sub-images; determining a target sub-image where the target area is located based on the position information of the target area; and determining the screen corresponding to the target sub-image as a target screen.
Optionally, the controlling the target screen to display the preset image includes: discarding a sub-image corresponding to a target screen in the first image; and sending the preset image to a target receiving end corresponding to the target screen, wherein the target receiving end is used for controlling the target screen to display the preset image.
Optionally, the detecting the target event within the preset range includes: acquiring a second image within a preset range; and identifying the second image, and judging whether a target event occurs in a preset range.
Optionally, identifying the second image, and determining whether the target event occurs within the preset range includes: identifying a second image, and determining identity information of at least one first object contained in the second image; determining whether the at least one first object is a target object based on the identity information of the at least one first object; acquiring an image of the second object if the second object is not the target object; identifying an image of a second object, and judging whether the second object performs a first action on a large screen; if the second object executes the first action on the large screen, determining that a target event occurs in a preset range; and if the at least one first object is a target object or the second object does not perform the first action on the large screen, determining that the target event does not occur within the preset range.
Optionally, identifying the second image, and determining whether the target event occurs within the preset range includes: identifying a second image, and determining at least one image of a third object contained in the second image; identifying an image of at least one third object, and judging whether the at least one third object executes a second behavior on the large screen; if at least one third object does not execute the second behavior on the large screen, determining that no target event occurs in the preset range; and if the fourth object executes the second action on the large screen, determining that the target event occurs within the preset range.
Optionally, before detecting the target event within the preset range, the method further includes: receiving a thumbnail sent by at least one acquisition end, wherein the thumbnail is a thumbnail corresponding to a display image of image source equipment; outputting at least one thumbnail and receiving an input display instruction; determining a target thumbnail based on the display instruction; and controlling the large screen to display the image corresponding to the target thumbnail.
Optionally, the controlling the image corresponding to the large-screen display target thumbnail includes: acquiring a target display image corresponding to the target thumbnail; segmenting a target display image according to a preset image segmentation mode to obtain a plurality of sub-images; and sending the plurality of sub-images to receiving ends corresponding to the plurality of screens, wherein the receiving ends are used for controlling the corresponding screens to display the received sub-images.
According to another aspect of the embodiments of the present invention, there is also provided a screen control device, including: the acquisition module is used for acquiring the position information of a target area in a first image displayed in the large screen under the condition that a target event is detected in a preset range of the large screen; the determining module is used for determining a target screen corresponding to the target area based on the position information of the target area, wherein the large screen is formed by splicing a plurality of screens; and the control module is used for controlling the target screen to display the preset image.
According to another aspect of the embodiments of the present invention, there is also provided a screen control system including: the large screen is formed by splicing a plurality of screens and is used for displaying a first image; the image server is connected with the screens and used for acquiring the position information of the target area in the first image under the condition that the target event is detected in the preset range of the large screen, determining the target screen corresponding to the target area based on the position information of the target area and controlling the target screen to display the preset image.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, which includes a stored program, wherein when the program runs, the apparatus on which the computer-readable storage medium is located is controlled to execute the above-mentioned screen control method.
According to another aspect of the embodiments of the present invention, there is also provided a processor, configured to execute a program, where the program executes the screen control method described above.
In the embodiments of the invention, when a target event is detected within the preset range of the large screen, the position information of the target area in the first image displayed on the large screen is acquired, the target screen corresponding to the target area is determined based on that position information, and the target screen is controlled to display the preset image so that the image of the target area is no longer shown. A viewer therefore cannot see the information of the sensitive area on the large screen, information leakage is avoided, the security of the large screen is improved, and the technical problem of low large-screen security in the related art is solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of a screen control method according to an embodiment of the present invention;
FIG. 2 is a schematic illustration of an alternative sensitive area according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an alternative large screen control system according to an embodiment of the present invention;
FIG. 4 is a flow chart of an alternative large screen display image processing flow according to an embodiment of the present invention;
FIG. 5 is a flow chart of an alternative large screen privacy function process flow according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an alternative large-screen display image when no suspected peeping event occurs according to an embodiment of the invention;
FIG. 7 is a schematic diagram of an alternative large-screen display image when a suspected peeping event occurs according to an embodiment of the invention;
FIG. 8 is a schematic diagram of a screen control apparatus according to an embodiment of the present invention; and
fig. 9 is a schematic diagram of a screen control system according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
According to an embodiment of the present invention, there is provided a screen control method. It should be noted that the steps shown in the flowchart of the drawings may be executed in a computer system, for example as a set of computer-executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps shown or described may be executed in an order different from the one given here.
Fig. 1 is a flowchart of a screen control method according to an embodiment of the present invention, as shown in fig. 1, the method including the steps of:
step S102, under the condition that a target event is detected in a preset range of a large screen, position information of a target area in a first image displayed in the large screen is obtained.
The preset range in the above steps may be a range in which the content displayed on the large screen can be viewed, or a range in which a peeping event is likely to occur. The target event may be a suspected peeping event, such as, but not limited to, an unauthorized person peeping at the screen or someone shooting the screen with a photographing tool. The first image may be the image currently displayed on the large screen, which changes in real time. The target area may be a sensitive area that should not be peeped at or candidly photographed, such as an area containing core information of an enterprise or the operation area of core software, but is not limited thereto.
Optionally, the target region comprises at least one of: a first display area for displaying a target application, a second display area for displaying a target element, wherein the target element comprises one of: text, pictures, and video.
The target application may be an application that is specified in advance by a user and is regarded as a sensitive application. The target elements may be user-defined sensitive elements including, but not limited to, text, pictures, videos, and the like. As shown in fig. 2, both the sensitive application display area and the sensitive element display area may be marked as sensitive areas.
In an optional embodiment, whether a suspected peeping event occurs may be detected by shooting an image within the preset range and then applying image recognition. When it is determined that a suspected peeping event has occurred, the peep-proof function can be realized as follows: the first image currently displayed on the large screen is identified to judge whether it contains the target area, and if so, the position coordinates of the target area within the whole image are determined.
And step S104, determining a target screen corresponding to the target area based on the position information of the target area, wherein the large screen is formed by splicing a plurality of screens.
It should be noted that the large screen is formed by splicing a plurality of screens, and the first image displayed in the large screen is obtained by splicing the sub-images displayed in each screen, so that the image to be displayed can be divided in advance, and the corresponding sub-images are displayed by each screen.
To realize the peep-proof function, simply stopping the display of the image on the whole large screen would complicate the control process and harm the viewing experience of other viewers. In an alternative embodiment, to avoid this, it may be determined, based on the position information of the target area, on which screen or screens the target area is displayed; the screen(s) displaying the target area are taken as the target screen(s), and the peep-proof function is implemented by controlling only the target screen(s) to stop displaying the image.
And step S106, controlling the target screen to display a preset image.
The preset image in the above steps may be a preset replacement image, and the image may be a solid background image, or may be a specific image, but is not limited thereto.
In an optional embodiment, in order to avoid the information of the sensitive area from being leaked, when a suspected peeping event occurs, the display of the image in the target screen may be stopped, and the target screen is controlled to display a preset image, so that the user cannot see the information of the sensitive area on the large screen.
According to the scheme provided by the embodiment of the invention, when a target event is detected within the preset range of the large screen, the position information of the target area in the first image displayed on the large screen is acquired, the target screen corresponding to the target area is determined based on that position information, and the target screen is controlled to display the preset image so that the image of the target area is no longer shown. A viewer therefore cannot see the information of the sensitive area on the large screen, information leakage is avoided, the security of the large screen is improved, and the technical problem of low large-screen security in the related art is solved.
Optionally, the acquiring the position information of the first display area includes: detecting whether a target application is opened or not in a process of acquiring a display image of an image source device, wherein a first image is determined based on image data of the acquired display image; and if the target application is detected to be opened, acquiring window position information of the target application to obtain position information of the first display area.
In the whole large-screen system, each screen is connected to the image server through an R end (receiving end), and at least one image source device is connected to the image server through an S end (acquisition end); the S end collects the display images of the image source device, and the R end controls its screen to display images.
In an alternative embodiment, a list of applications to be monitored may be set in advance at the S end; the list is an interface provided to the user, and the user may add specific sensitive applications to it. While acquiring the display image of the image source device, the S end monitors the applications in the list. When an application in the list is opened, its window position information can be acquired in real time, and the window position information of the sensitive application is sent to the image server together with the acquired image data of the display image, so that the image server can determine the display area of the sensitive application.
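By way of illustration only, the following minimal Python sketch shows such a monitoring loop at the S end. The application names, the polling interval, and the helper get_window_rect() (which would wrap a platform-specific window API) are assumptions introduced for the example, not part of the disclosure.

```python
import time
import psutil  # process enumeration

# User-configured sensitive applications (the names are assumed for the example).
SENSITIVE_APPS = {"finance_dashboard.exe", "secret_report_viewer.exe"}

def get_window_rect(pid):
    """Hypothetical helper: return (x, y, w, h) of the process's main window.
    A real S end would wrap a platform window API (Win32, X11, ...) here."""
    return None

def poll_sensitive_windows():
    """Collect window rectangles of every sensitive application that is open."""
    rects = []
    for proc in psutil.process_iter(attrs=["pid", "name"]):
        if proc.info["name"] in SENSITIVE_APPS:
            rect = get_window_rect(proc.info["pid"])
            if rect is not None:
                rects.append(rect)
    return rects

if __name__ == "__main__":
    while True:
        # The S end would send these rectangles to the image server
        # together with the image data of each captured frame.
        sensitive_rects = poll_sensitive_windows()
        time.sleep(1.0)  # polling interval is an assumption
```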
Optionally, the acquiring the position information of the second display area includes: acquiring a display image of image source equipment to obtain image data; identifying image data, and determining whether the image data contains a target element; and determining the position information of the target element in the image data to obtain the position information of the second display area under the condition that the image data contains the target element.
In an alternative embodiment, the sensitive elements can be customized by the user and are recognized by the encoder at the S end; when a sensitive element is recognized, its display area can be determined.
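As one possible realization (the disclosure leaves the concrete recognition method to the encoder at the S end), a user-defined sensitive picture could be located by template matching; text or video elements would need OCR or other detectors. The following sketch uses OpenCV, and the matching threshold is an assumption:

```python
import cv2

def find_sensitive_element(frame_bgr, template_bgr, threshold=0.85):
    """Locate a user-defined sensitive picture inside a captured display frame.

    Returns (x, y, w, h) of the best match, i.e. the position information of
    the second display area, or None if no match exceeds the threshold.
    """
    result = cv2.matchTemplate(frame_bgr, template_bgr, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None
    h, w = template_bgr.shape[:2]
    return (max_loc[0], max_loc[1], w, h)
```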
Optionally, determining, based on the position information of the target area, a target screen corresponding to the target area includes: dividing the first image based on a preset image segmentation mode to obtain a plurality of sub-images; determining a target sub-image where the target area is located based on the position information of the target area; and determining the screen corresponding to the target sub-image as a target screen.
The preset image segmentation manner in the above step may be determined according to the combination manner of the multiple screens. For example, as shown in fig. 3, for a large screen composed of four screens, the preset image segmentation manner may be a "田" (tian) pattern, i.e. a 2x2 grid, where R1 corresponds to the upper-left sub-image after segmentation, R2 to the upper-right sub-image, R3 to the lower-left sub-image, and R4 to the lower-right sub-image; screen 1 is connected to R1, screen 2 to R2, screen 3 to R3, and screen 4 to R4.
In an alternative embodiment, the first image displayed on the large screen may be divided according to the combination manner of the plurality of screens to obtain the sub-image displayed by each screen. After the position information of the target area is determined, it can be determined in which sub-image or sub-images the target area is located, and therefore by which screen or screens the target area is displayed.
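A minimal sketch of this mapping is given below, assuming pixel coordinates for the sensitive area and the row-major 2x2 layout of fig. 3; the numbering convention and overlap test are assumptions for the example.

```python
def screens_covering_region(region, frame_w, frame_h, rows=2, cols=2):
    """Return the indices of the spliced screens whose sub-image overlaps the
    target (sensitive) region.

    `region` is (x, y, w, h) in pixels of the first image. Screens are numbered
    row-major: 0 upper-left, 1 upper-right, 2 lower-left, 3 lower-right.
    """
    x, y, w, h = region
    cell_w, cell_h = frame_w / cols, frame_h / rows
    hit = set()
    for row in range(rows):
        for col in range(cols):
            cx0, cy0 = col * cell_w, row * cell_h
            cx1, cy1 = cx0 + cell_w, cy0 + cell_h
            # axis-aligned rectangle overlap test
            if x < cx1 and x + w > cx0 and y < cy1 and y + h > cy0:
                hit.add(row * cols + col)
    return sorted(hit)

# A region in the right half that straddles the horizontal seam is shown on
# the upper-right and lower-right screens:
print(screens_covering_region((1400, 300, 300, 400), 1920, 1080))  # [1, 3]
```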
Optionally, the controlling the target screen to display the preset image includes: discarding a sub-image corresponding to a target screen in the first image; and sending the preset image to a target receiving end corresponding to the target screen, wherein the target receiving end is used for controlling the target screen to display the preset image.
The receiving end in the above steps may be an R end of the control screen, and the target receiving end may be a target R end.
In an optional embodiment, the image server may discard the image data of the sub-image to be sent to the target R terminal, send the image data of the preset image to the target R terminal, and the target R terminal decodes the image data of the preset image and displays the decoded image data on a corresponding screen, so that the screen does not display the sub-image of the original display image any more, and thus information of the sensitive region is not leaked.
It should be noted that the sub-image to be sent to the target R end includes a sensitive area.
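A sketch of this dispatch step follows; send_to_r_end() is a hypothetical stand-in for encoding the image and transmitting it to the corresponding receiving end.

```python
def dispatch_frames(sub_images, target_screens, preset_image, send_to_r_end):
    """Send each screen's sub-image to its receiving end, except that screens
    whose sub-image contains the sensitive area receive the preset
    (replacement) image instead; their original sub-images are discarded.
    """
    for idx, sub_image in enumerate(sub_images):
        if idx in target_screens:
            send_to_r_end(idx, preset_image)  # e.g. a solid black image
        else:
            send_to_r_end(idx, sub_image)     # normal display path
```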
Optionally, the detecting the target event within the preset range includes: acquiring a second image within a preset range; and identifying the second image, and judging whether a target event occurs in a preset range.
In an optional embodiment, a camera mounted on the large screen may be used to acquire a second image within a preset range, and the acquired second image is sent to the image server, and the image server may determine whether a suspected peeping event occurs at present according to the second image.
It should be noted that the camera is usually arranged on the front panel of the large screen, or a separate camera is placed around the large screen. By adjusting its position, the camera can capture the scene within a certain range in front of the large screen, yielding the second image.
It should be further noted that the camera acquires the second image in real time, the image server judges whether a suspected peeping event occurs in real time, and if it is determined that the suspected peeping event does not occur, the above steps may be repeated.
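The acquisition-and-judgement loop could look like the following sketch; detect_target_event() and on_event() are placeholders for the AI-based judgement and the privacy handling described in this embodiment, and the camera index is an assumption.

```python
import cv2

def monitor_loop(detect_target_event, on_event, camera_index=0):
    """Continuously grab frames ("second images") from the camera mounted on
    the large screen and hand them to the image server's detector."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                continue
            if detect_target_event(frame):
                on_event()  # trigger the peep-proof handling
    finally:
        cap.release()
```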
Optionally, identifying the second image, and determining whether the target event occurs within the preset range includes: identifying a second image, and determining identity information of at least one first object contained in the second image; determining whether the at least one first object is a target object based on the identity information of the at least one first object; acquiring an image of the second object if the second object is not the target object; identifying an image of a second object, and judging whether the second object performs a first action on a large screen; if the second object executes the first action on the large screen, determining that a target event occurs in a preset range; and if the at least one first object is a target object or the second object does not perform the first action on the large screen, determining that the target event does not occur within the preset range.
The first object in the above steps may refer to a user viewing a large screen, and the target object may refer to a legal user viewing the large screen, for example, the target object may be a legal user or a legal viewer, but not limited thereto. The first behavior mentioned above can be, but is not limited to, peeping a large screen.
In an optional embodiment, the identity information of each user in the second image can be identified by an AI algorithm to determine whether anyone other than the legitimate users is present. If other people are present, their actions, gaze, positions and so on can be analyzed from their images to judge whether they are peeping at the large screen, and thereby to determine whether a suspected peeping event is present in the second image.
Optionally, identifying the second image, and determining whether the target event occurs within the preset range includes: identifying a second image, and determining at least one image of a third object contained in the second image; identifying an image of at least one third object, and judging whether the at least one third object executes a second behavior on the large screen; if at least one third object does not execute the second behavior on the large screen, determining that no target event occurs in the preset range; and if the fourth object executes the second action on the large screen, determining that the target event occurs within the preset range.
The third object in the above steps may be, but is not limited to, a candid camera or a user watching a large screen. The second behavior may be a behavior of performing a candid photograph using a candid photograph tool, but is not limited thereto.
In an optional embodiment, an AI algorithm may be used to identify, in the second image, images of candid-shooting tools or human body motions, and to judge whether anyone is using such a tool to candidly shoot the large screen, thereby determining whether a suspected candid-shooting event is present in the second image.
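Combining the two judgement paths above, the server-side decision can be reduced to the following sketch; the structure of the per-person recognition results (the `id`, `peeping` and `shooting` fields) is an assumption, since the disclosure does not fix the output format of the AI algorithm.

```python
def target_event_occurred(persons, authorized_ids):
    """Decide whether a suspected peeping or candid-shooting event is present.

    `persons` is a list of dicts produced by the recognition step, e.g.
    {"id": "emp_001" or None, "peeping": bool, "shooting": bool}.
    """
    for person in persons:
        authorized = person.get("id") in authorized_ids
        if not authorized and person.get("peeping", False):
            return True   # an unauthorized viewer looking at the screen
        if person.get("shooting", False):
            return True   # anyone shooting the screen with a tool
    return False
```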
Optionally, before detecting the target event within the preset range, the method further includes: receiving a thumbnail sent by at least one acquisition end, wherein the thumbnail is a thumbnail corresponding to a display image of image source equipment; outputting at least one thumbnail and receiving an input display instruction; determining a target thumbnail based on the display instruction; and controlling the large screen to display the image corresponding to the target thumbnail.
In an alternative embodiment, the image server may receive the thumbnails of the images sent by the S ends and display them. After seeing the thumbnails on the image server, the user can select the image to be displayed on the large screen through a keyboard, mouse, or touch operation; the control end generates a display instruction according to the user's operation and sends it to the image server, the display instruction containing the identification code of the S end to be displayed. After receiving the display instruction, the image server determines, according to the S-terminal identification code in the instruction, the image data of the display image to be displayed on the large screen.
It should be noted that each S-terminal identification code corresponds to one S terminal, that is, to one image source device. Since the image server may be connected to a plurality of S terminals, and each S terminal is connected to one image source device, the user indicates through the S-terminal identification code carried in the display instruction which S terminal's acquired image is to be displayed on the large screen.
Optionally, the controlling the image corresponding to the large-screen display target thumbnail includes: acquiring a target display image corresponding to the target thumbnail; segmenting a target display image according to a preset image segmentation mode to obtain a plurality of sub-images; and sending the plurality of sub-images to receiving ends corresponding to the plurality of screens, wherein the receiving ends are used for controlling the corresponding screens to display the received sub-images.
In an optional embodiment, the image server may decode the image data of the display image, divide the display image into the sub-images corresponding to the respective screens according to the preset image division manner, encode the sub-images to generate image data of the sub-images, and send the image data of each sub-image to the R end connected to the corresponding screen. Each R end then decodes the received encoded data and displays the corresponding sub-image on its connected screen.
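A minimal sketch of the division step is shown below, operating on the decoded frame with NumPy array slicing; the encoding and network transmission steps are omitted.

```python
import numpy as np

def split_frame(frame, rows=2, cols=2):
    """Divide a decoded display image (an H x W x C array) into row-major
    sub-images, one per screen, following the preset segmentation mode."""
    h, w = frame.shape[:2]
    cell_h, cell_w = h // rows, w // cols
    subs = []
    for r in range(rows):
        for c in range(cols):
            subs.append(frame[r * cell_h:(r + 1) * cell_h,
                              c * cell_w:(c + 1) * cell_w].copy())
    return subs  # each sub-image would then be encoded and sent to its R end

if __name__ == "__main__":
    demo = np.zeros((1080, 1920, 3), dtype=np.uint8)  # stand-in decoded frame
    parts = split_frame(demo)
    print([p.shape for p in parts])  # four 540 x 960 sub-images
```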
A preferred embodiment of the present invention will be described in detail with reference to fig. 3 to 7.
As shown in fig. 3, the large screen includes four screens, screen 1 to screen 4. Each screen is connected to its R terminal through an HDMI video cable (drawn as a thick line in fig. 3), each R terminal is connected to the image server through a network cable, and the image server is connected to the S terminal through a network cable. The user can set, through the image server, how the image source device is displayed on the screens of the large screen; in particular, the display image of one image source device may be displayed by a combination of a plurality of screens.
It should be noted that, according to actual needs, a plurality of S terminals may be set to connect with the image server, the R terminal may be placed locally with the large screen, and the image server may be placed in a machine room in a different place.
The top end of the large screen is provided with a camera 00, and the camera 00 is connected with an image server.
As shown in fig. 4, the processing flow of the large screen display image is as follows:
Step S41, the image server receives the image collected by the S terminal and displays a thumbnail of the image.
In step S42, the image server receives a display instruction from the user, and determines image data of a display image to be displayed on the large screen according to the display instruction.
Step S43, the image server decodes the image data of the display image, divides the display image into the sub-images corresponding to the screens according to the preset image division manner, encodes each sub-image to generate image data of the sub-images, and sends the image data of each sub-image to the R terminal connected to the corresponding screen.
In step S44, each R terminal decodes the received image data and displays its sub-image on the connected screen.
For example, as shown in fig. 3, R1 connects screen 1, R2 connects screen 2, R3 connects screen 3, and R4 connects screen 4, where R1 corresponds to the sub-image at the top left corner after division, R2 corresponds to the sub-image at the top right corner after division, R3 corresponds to the sub-image at the bottom left corner after division, and R4 corresponds to the sub-image at the bottom right corner after division.
As shown in fig. 5, the processing flow of the peep-proof function is as follows:
Step S51, the camera on the large screen acquires a user image and sends it to the image server.
Step S52, the image server judges, according to the user image, whether a suspected peeping event is currently occurring.
Optionally, if a suspected peeping event occurs, step S53 is executed; if no suspected peeping event occurs, step S51 is executed again.
In step S53, after receiving the image data of the display image sent by the S-side, the image server acquires the position information of the sensitive area from the image data.
Step S54, the image server decodes the image data of the display image, determines, according to the position information of the sensitive region and the preset image segmentation mode, which sub-images of the frame contain the sensitive region, and records the R ends corresponding to those sub-images as the target R ends.
For example, as shown in fig. 6, the outer frame represents the large screen, the 4 boxes represent the 4 screens shown in fig. 3, the circle represents the current frame image, and the solid rectangles represent sensitive areas. It can be determined that the sensitive areas fall in the upper-right and lower-left sub-images after segmentation, so the corresponding target R ends are R2 and R3.
In step S55, the image server discards the image data of the sub-images that were to be sent to the target R ends, and sends the image data of the preset image to the target R ends instead.
In step S56, the target R-side decodes the image data of the preset image and displays the decoded image data on the connected screen.
For example, as shown in fig. 7, when it is determined that a suspected peeping event has occurred and the target R ends corresponding to the sensitive areas are determined to be R2 and R3, the image server may send the image data of the preset image (a black image) to R2 and R3; black images are then displayed on screen 2 connected to R2 and screen 3 connected to R3, so that the user cannot see the sensitive areas on the large screen.
Through the above steps, when a suspected peeping event around the large screen is determined, the R ends corresponding to the sub-images that contain the sensitive area after the current frame image is segmented are first determined according to the preset display mode; then the image data of the current frame that would otherwise have been sent to those R ends is discarded and the preset replacement image is sent instead, so that the user cannot see the information of the sensitive area on the large screen and information leakage is avoided.
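For illustration, the server-side portion of this flow can be sketched as follows, reusing split_frame() and screens_covering_region() from the sketches above; send_to_r_end() again stands in for the encoded network transmission, and all names are assumptions rather than part of the disclosure.

```python
def handle_frame(frame, sensitive_rects, suspected_event, preset_image,
                 send_to_r_end, rows=2, cols=2):
    """One server-side iteration of the peep-proof flow (cf. steps S53-S56):
    split the decoded frame (an H x W x C array), find which screens contain
    a sensitive region, and replace those screens' sub-images while a
    suspected event is active."""
    h, w = frame.shape[:2]
    subs = split_frame(frame, rows, cols)
    targets = set()
    if suspected_event:
        for rect in sensitive_rects:
            targets.update(screens_covering_region(rect, w, h, rows, cols))
    for idx, sub in enumerate(subs):
        send_to_r_end(idx, preset_image if idx in targets else sub)
```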
Example 2
According to an embodiment of the present invention, a screen control device is provided. The device can execute the screen control method provided in the above embodiment; implementations that are the same as in the above embodiment and its preferred implementations are not described here again.
Fig. 8 is a schematic diagram of a screen control apparatus according to an embodiment of the present invention, as shown in fig. 8, the apparatus including:
the acquiring module 82 is configured to acquire position information of a target area in a first image displayed in a large screen when a target event is detected within a preset range of the large screen;
a determining module 84, configured to determine a target screen corresponding to a target area based on position information of the target area, where a large screen is formed by splicing multiple screens;
and the control module 86 is used for controlling the target screen to display the preset image.
Optionally, the obtaining module includes: the device comprises a detection unit, a processing unit and a display unit, wherein the detection unit is used for detecting whether a target application is opened or not in the process of acquiring a display image of an image source device, and a first image is determined based on the acquired image data of the display image; and the first acquisition unit is used for acquiring the window position information of the target application to obtain the position information of the first display area if the target application is detected to be opened.
Optionally, the obtaining module includes: the acquisition unit is used for acquiring a display image of image source equipment to obtain image data; a first recognition unit configured to recognize the image data and determine whether the image data contains a target element; and the first determining unit is used for determining the position information of the target element in the image data to obtain the position information of the second display area under the condition that the image data contains the target element.
Optionally, the determining module includes: the dividing unit is used for dividing the first image based on a preset image segmentation mode to obtain a plurality of sub-images; the second determining unit is used for determining a target sub-image where the target area is located based on the position information of the target area; and the third determining unit is used for determining that the screen corresponding to the target sub-image is the target screen.
Optionally, the control module comprises: the discarding unit is used for discarding the sub-image corresponding to the target screen in the first image; the first sending unit is used for sending the preset image to a target receiving end corresponding to the target screen, wherein the target receiving end is used for controlling the target screen to display the preset image.
Optionally, the apparatus further comprises: the detection module is used for detecting a target event in a preset range, and comprises: the second acquisition unit is used for acquiring a second image within a preset range; and the second identification unit is used for identifying the second image and judging whether a target event occurs in a preset range.
Optionally, the second identification unit comprises: the first identification subunit is used for identifying the second image and determining the identity information of at least one first object contained in the second image; a first determining subunit, configured to determine whether the at least one first object is a target object based on the identity information of the at least one first object; a first acquisition subunit configured to acquire an image of the second object if the second object is not the target object; the second identification subunit is used for identifying the image of the second object and judging whether the second object executes the first action on the large screen; the first determining subunit is used for determining that a target event occurs within a preset range if the second object performs a first action on the large screen; and the second determining subunit is used for determining that the target event does not occur within the preset range if the at least one first object is the target object or the second object does not perform the first action on the large screen.
Optionally, the second identification unit comprises: a third identifying subunit, configured to identify the second image and determine an image of at least one third object included in the second image; the fourth identification subunit is used for identifying the image of the at least one third object and judging whether the at least one third object executes a second behavior on the large screen; the third determining subunit is configured to determine that a target event does not occur within the preset range if at least one third object does not execute the second behavior on the large screen; and the fourth determining subunit is used for determining that the target event occurs within the preset range if the fourth object executes the second action on the large screen.
Optionally, the apparatus further comprises: the receiving module is used for receiving the thumbnail sent by at least one acquisition end, wherein the thumbnail is a thumbnail corresponding to a display image of the image source equipment; the output module is used for outputting at least one thumbnail and receiving an input display instruction; the determining module is further used for determining a target thumbnail based on the display instruction; the control module is also used for controlling the large screen to display the image corresponding to the target thumbnail.
Optionally, the control module comprises: a third acquiring unit, configured to acquire a target display image corresponding to the target thumbnail; the segmentation unit is used for segmenting the target display image according to a preset image segmentation mode to obtain a plurality of sub-images; and the second sending unit is used for sending the plurality of sub-images to receiving ends corresponding to the plurality of screens, wherein the receiving ends are used for controlling the corresponding screens to display the received sub-images.
Example 3
According to an embodiment of the present invention, a screen control system is provided. The system can execute the screen control method provided in the above embodiment; implementations that are the same as in the above embodiment and its preferred implementations are not described here again.
Fig. 9 is a schematic diagram of a screen control system according to an embodiment of the present invention, as shown in fig. 9, the system including: a large screen 92 formed by splicing a plurality of screens 922, and an image server 94 connected to the plurality of screens 922.
Wherein the large screen 92 is used for displaying a first image; the image server 94 is configured to, when a target event is detected within a preset range of the large screen, acquire position information of a target area in the first image, determine a target screen corresponding to the target area based on the position information of the target area, and control the target screen to display a preset image.
Optionally, the system further comprises: and the acquisition end is connected with the image source equipment and the image server.
The acquisition terminal is used for detecting whether a target application is opened or not in the process of acquiring a display image of image source equipment, acquiring window position information of the target application if the target application is opened, and sending the window position information to the image server, wherein a first image is determined based on the acquired image data of the display image; the image server is used for determining the position information of the first display area based on the window position information.
Optionally, the system further comprises: and the acquisition end is connected with the image source equipment and the image server.
The acquisition terminal is used for acquiring a display image of image source equipment, obtaining image data, identifying the image data, determining whether the image data contains a target element, determining the position information of the target element in the image data under the condition that the image data contains the target element, and sending the position information of the target element in the image data to the image server; the image server is used for determining the position information of the second display area based on the position information of the target element in the image data.
Optionally, the image server is further configured to divide the first image based on a preset image segmentation manner to obtain a plurality of sub-images, determine a target sub-image where the target area is located based on the position information of the target area, and determine that a screen corresponding to the target sub-image is a target screen.
Optionally, the system further comprises: and each receiving end is connected with the corresponding screen and the image server.
The image server is further used for discarding the sub-image corresponding to the target screen in the first image and sending the preset image to the target receiving end corresponding to the target screen; the target receiving end is used for controlling a target screen to display a preset image.
Optionally, the system further comprises: and the acquisition device is connected with the image server.
The acquisition device is used for acquiring a second image within a preset range; the image server is also used for identifying the second image and judging whether a target event occurs in a preset range.
Optionally, the image server is further configured to identify the second image, and determine identity information of at least one first object included in the second image; determining whether the at least one first object is a target object based on the identity information of the at least one first object; acquiring an image of the second object if the second object is not the target object; identifying an image of a second object, and judging whether the second object performs a first action on a large screen; if the second object executes the first action on the large screen, determining that a target event occurs in a preset range; and if the at least one first object is a target object or the second object does not perform the first action on the large screen, determining that the target event does not occur within the preset range.
Optionally, the image server is further configured to identify a second image, and determine an image of at least one third object included in the second image; identifying an image of at least one third object, and judging whether the at least one third object executes a second behavior on the large screen; if at least one third object does not execute the second behavior on the large screen, determining that no target event occurs in the preset range; and if the fourth object executes the second action on the large screen, determining that the target event occurs within the preset range.
Optionally, the system further comprises: and each acquisition end is connected with the corresponding image source equipment and the image server.
Each acquisition end is used for acquiring a thumbnail corresponding to the display image of its image source device and sending the thumbnail to the image server; the image server is also used for outputting at least one thumbnail and receiving an input display instruction, determining a target thumbnail based on the display instruction, and controlling the large screen to display the image corresponding to the target thumbnail.
Optionally, the system further comprises: and each receiving end is connected with the corresponding screen and the image server.
The image server is further used for acquiring a target display image corresponding to the target thumbnail, dividing the target display image according to a preset image dividing mode to obtain a plurality of sub-images, and sending the plurality of sub-images to a plurality of receiving terminals; each receiving end is used for controlling the corresponding screen to display the received sub-images.
Example 4
According to an embodiment of the present invention, there is provided a computer-readable storage medium including a stored program, wherein, when the program runs, an apparatus in which the computer-readable storage medium is located is controlled to execute the screen control method in the above-described embodiment 1.
Example 5
According to an embodiment of the present invention, there is provided a processor for running a program, wherein the program executes the screen control method in embodiment 1 described above when running.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.

Claims (13)

1. A screen control method, comprising:
under the condition that a target event is detected within a preset range of a large screen, acquiring position information of a target area in a first image displayed in the large screen;
determining a target screen corresponding to the target area based on the position information of the target area, wherein the large screen is formed by splicing a plurality of screens;
and controlling the target screen to display a preset image.
2. The method of claim 1, wherein the target region comprises at least one of: a first display area for displaying a target application, a second display area for displaying a target element, wherein the target element comprises one of: text, pictures, and video.
3. The method of claim 2, wherein obtaining the location information of the first display region comprises:
detecting whether the target application is opened or not in the process of acquiring a display image of an image source device, wherein the first image is determined based on the acquired image data of the display image;
and if the target application is detected to be opened, acquiring window position information of the target application to obtain position information of the first display area.
4. The method of claim 2, wherein obtaining the location information of the second display region comprises:
acquiring a display image of image source equipment to obtain image data;
identifying the image data, determining whether the image data contains the target element;
and under the condition that the image data contains the target element, determining the position information of the target element in the image data to obtain the position information of the second display area.
5. The method of claim 1, wherein determining the target screen corresponding to the target area based on the position information of the target area comprises:
dividing the first image based on a preset image segmentation mode to obtain a plurality of sub-images;
determining a target sub-image where the target area is located based on the position information of the target area;
and determining the screen corresponding to the target sub-image as the target screen.
6. The method of claim 1, wherein controlling the target screen to display a preset image comprises:
discarding a sub-image corresponding to the target screen in the first image;
and sending the preset image to a target receiving end corresponding to the target screen, wherein the target receiving end is used for controlling the target screen to display the preset image.
7. The method of claim 1, wherein detecting the target event within the preset range comprises:
acquiring a second image within the preset range;
and identifying the second image, and judging whether a target event occurs in the preset range.
8. The method of claim 7, wherein identifying the second image and determining whether the target event occurs within the preset range comprises:
identifying the second image, and determining identity information of at least one first object contained in the second image;
determining whether the at least one first object is a target object based on the identity information of the at least one first object;
acquiring an image of a second object if the second object is not the target object;
identifying the image of the second object, and determining whether the second object performs a first behavior on the large screen;
if the second object performs the first behavior on the large screen, determining that the target event occurs within the preset range;
and if the at least one first object is the target object or the second object does not perform the first behavior on the large screen, determining that the target event does not occur within the preset range.
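The decision rule of claims 7 and 8, reduced to its branching structure. Image identification itself is out of scope here and is represented only by its results; the example identities and parameter names are assumptions, not taken from the claims.

```python
def target_event_occurred(first_object_ids: list[str],
                          authorized_ids: set[str],
                          second_object_performs_behavior: bool) -> bool:
    """Simplified reading of claims 7-8: if every first object recognized in
    the second image is the target (authorized) object, no target event
    occurs; otherwise the target event occurs only if the non-target second
    object performs the first behavior on the large screen."""
    if all(identity in authorized_ids for identity in first_object_ids):
        return False
    return second_object_performs_behavior

print(target_event_occurred(["alice"], {"alice", "bob"}, True))    # False: only target objects present
print(target_event_occurred(["eve"], {"alice", "bob"}, True))      # True: non-target object + first behavior
print(target_event_occurred(["eve"], {"alice", "bob"}, False))     # False: non-target object, no behavior
```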
9. The method of claim 7, wherein identifying the second image and determining whether the target event occurs within the preset range comprises:
identifying the second image, and determining an image of at least one third object contained in the second image;
identifying the image of the at least one third object, and determining whether the at least one third object performs a second behavior on the large screen;
if none of the at least one third object performs the second behavior on the large screen, determining that the target event does not occur within the preset range;
and if a fourth object among the at least one third object performs the second behavior on the large screen, determining that the target event occurs within the preset range.
10. A screen control apparatus, comprising:
the acquisition module is used for acquiring the position information of a target area in a first image displayed on a large screen under the condition that a target event is detected within a preset range of the large screen;
the determining module is used for determining a target screen corresponding to the target area based on the position information of the target area, wherein the large screen is formed by splicing a plurality of screens;
and the control module is used for controlling the target screen to display a preset image.
11. A screen control system, comprising:
the large screen is formed by splicing a plurality of screens and is used for displaying a first image;
and the image server is connected with the plurality of screens and is used for acquiring position information of a target area in the first image under the condition that a target event is detected within a preset range of the large screen, determining a target screen corresponding to the target area based on the position information of the target area, and controlling the target screen to display a preset image.
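For the system of claim 11, the image server's role can be summarised as holding the screen-to-receiver mapping and pushing the preset image only to the affected receiving ends. A toy sketch with in-memory outboxes standing in for the receiving ends; all names here are hypothetical.

```python
class ImageServer:
    """Stand-in for the image server of claim 11: connected to the screens of
    the spliced wall; on a detected target event it sends the preset image to
    the receiving ends of the target screens only."""

    def __init__(self, outboxes: dict[int, list]):
        self.outboxes = outboxes              # screen id -> messages sent to its receiving end

    def handle_target_event(self, target_screens: set[int], preset_image: bytes) -> None:
        for screen_id in target_screens:
            self.outboxes[screen_id].append(preset_image)

outboxes = {0: [], 1: [], 2: [], 3: []}       # a 2x2 spliced wall
ImageServer(outboxes).handle_target_event({2}, b"preset-image")
print(outboxes)   # {0: [], 1: [], 2: [b'preset-image'], 3: []}
```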
12. A computer-readable storage medium, comprising a stored program, wherein, when the program runs, an apparatus in which the computer-readable storage medium is located is controlled to execute the screen control method according to any one of claims 1 to 9.
13. A processor, characterized in that the processor is configured to run a program, wherein the program, when running, executes the screen control method according to any one of claims 1 to 9.
CN202010917306.9A 2020-09-03 2020-09-03 Screen control method, device and system Pending CN112069556A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010917306.9A CN112069556A (en) 2020-09-03 2020-09-03 Screen control method, device and system

Publications (1)

Publication Number Publication Date
CN112069556A true CN112069556A (en) 2020-12-11

Family

ID=73666421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010917306.9A Pending CN112069556A (en) 2020-09-03 2020-09-03 Screen control method, device and system

Country Status (1)

Country Link
CN (1) CN112069556A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105389527A (en) * 2015-10-27 2016-03-09 努比亚技术有限公司 Peek prevention apparatus and method for mobile terminal
CN106557711A (en) * 2016-11-04 2017-04-05 深圳大学 The screen privacy guard method of mobile terminal device and system
CN107679426A (en) * 2017-09-08 2018-02-09 维沃移动通信有限公司 A kind of screen content display method thereof and mobile terminal
CN107992730A (en) * 2017-11-28 2018-05-04 宇龙计算机通信科技(深圳)有限公司 A kind of screen message guard method and device
CN111125696A (en) * 2019-12-31 2020-05-08 维沃移动通信有限公司 Information prompting method and electronic equipment

Similar Documents

Publication Publication Date Title
CN109191730B (en) Information processing apparatus
Çiftçi et al. A reliable and reversible image privacy protection based on false colors
CN111711794A (en) Anti-candid image processing method and device, terminal and storage medium
KR101975247B1 (en) Image processing apparatus and image processing method thereof
KR100669837B1 (en) Extraction of foreground information for stereoscopic video coding
KR20180091915A (en) Dynamic video overlay
CN111310134B (en) Screen watermark generation method, device and equipment
CN112257124A (en) Image processing method and device
CN111988672A (en) Video processing method and device, electronic equipment and storage medium
CN113012034A (en) Method, device and system for image display processing
CN112257123A (en) Image processing method and system
CN113552989A (en) Screen recording method and device and electronic equipment
CN111294543A (en) System and method for video monitoring photographing protection
CN112912882A (en) Control method and device of terminal equipment and storage medium
CN112069556A (en) Screen control method, device and system
CN117037270A (en) Screen anti-shooting system and method based on image processing
CN111177770B (en) Sensitive information protection method, mobile equipment and storage device
JP2008217675A (en) Information browsing system, terminal, control method, control program and storage medium
CN110135204A (en) Data guard method and device
JP2009156948A (en) Display control device, display control method, and display control program
CN116094811A (en) Secret information anti-photographing alarm method, system, equipment and readable storage medium
KR102347137B1 (en) Screen data leakage prevention apparatus and method
CN112004065B (en) Video display method, display device and storage medium
CN114339449A (en) Copyright protection method for embedding watermark in display system
CN112051977B (en) Screen control method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20201211)