CN110490140A - Screen display state determination method and apparatus, computer device and storage medium - Google Patents
Screen display state determination method and apparatus, computer device and storage medium
- Publication number
- CN110490140A (publication); CN201910773812.2A (application)
- Authority
- CN
- China
- Prior art keywords
- screen
- image
- human body
- working region
- display state
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/70—Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
- G06F21/82—Protecting input, output or interconnection devices
- G06F21/84—Protecting input, output or interconnection devices output devices, e.g. displays or monitors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Abstract
This application relates to a screen display state determination method and apparatus, a computer device, and a storage medium. The method includes: obtaining a work area image; detecting whether a human body feature is present in the work area image; when no human body feature is detected, extracting a screen area image from the work area image; and determining the screen display state from the pixel information of the screen area image. This method improves the efficiency of recognizing the screen display state.
Description
Technical field
This application relates to the field of computer technology, and in particular to a screen display state determination method and apparatus, a computer device, and a storage medium.
Background technique
With the development of information technology, electronic office work has become increasingly common and plays an ever more important role in our lives. Large amounts of data are stored on electronic devices, much of it confidential, so ensuring the security of the data on those devices is an important task in electronic office work.

To keep electronic data secure, a device's screen generally needs to be locked after a member of staff leaves their workstation, to prevent the information on the device from leaking. In conventional practice, checking whether the screen has been locked after a member of staff leaves relies on manual inspection, which cannot monitor the display state of the screen in real time and therefore makes such monitoring inefficient.
Summary of the invention
In view of the above technical problems, it is necessary to provide a screen display state determination method, apparatus, computer device, and storage medium that can improve the efficiency of determining the screen display state.
A screen display state determination method, the method comprising:

obtaining a work area image;

detecting whether a human body feature is present in the work area image;

when no human body feature is detected, extracting a screen area image from the work area image; and

determining the screen display state from the pixel information of the screen area image.
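As an illustrative sketch (not part of the patent text), the four claimed steps can be chained as follows; the three callables are hypothetical placeholders for the detectors that the embodiments below describe (a pose model, a target detection model, and a pixel check).

```python
def determine_screen_state(work_area_image, detect_human, extract_screen, is_bright):
    """Sketch of the claimed flow. The three callables stand in for the
    detectors described in the embodiments; their names are illustrative."""
    if detect_human(work_area_image):
        return "human present"          # no screen check is needed
    screen = extract_screen(work_area_image)  # crop the screen area
    return "bright" if is_bright(screen) else "protected"
```

For example, wiring in trivial stand-ins makes the control flow visible: with a detector that reports no human and a pixel check that reports a bright screen, the function returns `"bright"`.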
In one embodiment, the method further comprises:

when a human body feature is detected, obtaining the personnel human body feature associated with the work area image;

calculating the degree of match between the detected human body feature and the personnel human body feature; and

when the degree of match is less than a preset match threshold, extracting a screen area image from the work area image and determining the screen display state from the pixel information of the screen area image.
In one embodiment, after determining the screen display state from the pixel information of the screen area image, the method comprises: when the screen display state is determined to be bright, changing the screen display state to the screen protection state.
In one embodiment, determining the screen display state from the pixel information of the screen area image comprises:

obtaining the total pixel count of the screen area image;

converting the screen area image into a binary image and calculating the valid pixel count of the binary image;

calculating the ratio of the valid pixel count to the total pixel count; and

when the ratio is greater than a preset pixel threshold, determining that the screen display state is bright, and when the ratio is not greater than the preset pixel threshold, determining that the screen display state is the screen protection state.
In one embodiment, the method further comprises:

obtaining a video stream and extracting multiple work area images from the video stream at a preset frequency; and

when no human body feature is detected in multiple consecutive work area images within a preset duration, and the screen display state is determined to be bright, changing the screen display state to the screen protection state.
In one embodiment, detecting whether a human body feature is present in the work area image comprises:

inputting the work area image into a human pose machine learning model;

obtaining key point coordinates from the human pose machine learning model;

connecting the key point coordinates to obtain a profile; and

when the similarity between the profile and a human contour figure exceeds a preset similarity threshold, determining that a human body feature is present in the work area image, and when the similarity does not exceed the preset similarity threshold, determining that no human body feature is present in the work area image.
In one embodiment, extracting a screen area image from the work area image comprises:

inputting the work area image into a target detection machine learning model to obtain the locations and match probabilities of candidate regions under multiple scale matching patterns;

obtaining the location of the region with the highest match probability; and

extracting the screen area image corresponding to that location from the work area image.
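A minimal sketch of the multi-scale selection logic described above; `match_at_scale` is a hypothetical stand-in for the target detection machine learning model, which the patent does not specify further, and the scale list is illustrative.

```python
def best_screen_region(work_image, match_at_scale, scales=(0.5, 0.75, 1.0, 1.25)):
    """match_at_scale(image, scale) -> (location, probability); a stand-in
    for the target detection model. Keep the location whose match
    probability is highest across the preset scale patterns."""
    candidates = [match_at_scale(work_image, s) for s in scales]
    best_location, _ = max(candidates, key=lambda lp: lp[1])
    return best_location
```

With a dummy scorer whose probability grows with scale, the function simply returns the location reported at the largest scale.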
A screen display state determination apparatus, the apparatus comprising:

an image acquisition module for obtaining a work area image;

a human body feature detection module for detecting whether a human body feature is present in the work area image;

a screen image extraction module for extracting a screen area image from the work area image when no human body feature is detected; and

a first display state determination module for determining the screen display state from the pixel information of the screen area image.
A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the above method when executing the computer program.

A computer-readable storage medium on which a computer program is stored, wherein the steps of the above method are implemented when the computer program is executed by a processor.
With the above screen display state determination method, apparatus, computer device and storage medium, a work area image is obtained and first checked for human body features; when no human body feature is present, a screen display state detection instruction is triggered and the screen display state is determined from the screen's pixel information. This realizes automatic detection and determination of the screen display state and improves the efficiency of recognizing it.
Brief description of the drawings
Fig. 1 is an application scenario diagram of the screen display state determination method in one embodiment;

Fig. 2 is a flow diagram of the steps of the screen display state determination method in one embodiment;

Fig. 3 is a flow diagram of the steps of the screen display state determination method in another embodiment;

Fig. 4 is a structural block diagram of the screen display state determination apparatus in one embodiment;

Fig. 5 is an internal structure diagram of the computer device in one embodiment.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of this application clearer, the application is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the application, not to limit it.
The screen display state determination method provided by this application can be applied in the environment shown in Fig. 1. There, the server 104 communicates with the terminal 102 over a network. The server 104 obtains a captured work area image and detects whether a human body feature is present in it; when no human body feature is detected, it extracts a screen area image from the work area image and determines the screen display state from the pixel information of the screen area image. When the server 104 determines that the screen display state is bright, it generates a screen protection instruction and sends it to the terminal 102, which changes its screen display state to the screen protection state according to the instruction.

The terminal 102 can be, but is not limited to, a personal computer, laptop, smartphone, tablet or portable wearable device, and the server 104 can be implemented as an independent server or as a cluster of multiple servers.
In one embodiment, as shown in Fig. 2, a screen display state determination method is provided. It can be applied to a terminal or to a server; the following description takes the method applied to the server 104 as an example. The method specifically includes the following steps:
Step S210: obtain a work area image.

The work area is the area in which a member of staff works. The work area image is a digital image of that area captured by an imaging device. Specifically, the work area image can be an image captured in real time, or an image frame intercepted from a real-time video stream.
Step S220: detect whether a human body feature is present in the work area image.

A human body feature characterizes the human form and can be used to identify a person. For example, it can be a skeletal feature or a hand feature; in another embodiment it may be a facial feature. The server obtains the work area image and performs feature extraction on it, for example with a feature extraction algorithm: first the feature points in the work area image are extracted, then a feature recognition algorithm judges whether the extracted feature points include a human body feature. As long as the extracted feature points include points that characterize a human body, the work area image is judged to contain a human body feature, and therefore a member of staff is judged to be present in the work area.

In another embodiment, a human body detection algorithm detects human body features directly from the work area image: when a human body feature is detected, the work area image is judged to contain a member of staff, and when none is detected, the work area image is judged not to contain a human body feature.
In one embodiment, the work area image includes at least one workstation; in general, one workstation corresponds to one computer device and one member of staff. When the work area image includes a single workstation, detect whether the work area corresponding to that workstation contains a human body feature; when it includes multiple workstations, detect each workstation's area separately and take the workstations at which no human body feature is detected as target workstations.
Step S230: when no human body feature is detected, extract a screen area image from the work area image.

The screen area can be the image region corresponding to the display of a computer device, and is used to show information. Specifically, when the display is working, the screen area is bright and can show information; when the display is not working, the screen area is in a protected state in which the information cannot leak, improving its security. A screen in the protected state is one whose information cannot be viewed directly: for example, the screen may be off or locked, or covered by a shield.

Specifically, when the feature points recognized in the work area image contain no human body feature, or no human body feature is detected in the work area image, the work area image contains no member of staff, and a screen area image is extracted from it. When the work area image includes multiple workstations, the screen area image corresponding to each target workstation is extracted.
Step S240: determine the screen display state from the pixel information of the screen area image.

A pixel is the basic unit of an image; the pixel information contains the image's luminance and chrominance values. When no member of staff is present in the work area, the screen area image is extracted from the work area image, and its display state is determined from the pixel information of the screen area image. In one embodiment, the luminance values in the screen area image's pixel information are obtained and the display state of the screen is determined from them: specifically, when the luminance is greater than a preset threshold, the screen display state is determined to be bright; otherwise, it is determined to be the protected state.
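The luminance-threshold check of this step might be sketched as follows; the threshold of 128 and the list-of-rows pixel representation are illustrative assumptions, not values from the patent.

```python
def screen_is_bright(screen_pixels, luminance_threshold=128):
    """Mean luminance over every pixel of the screen crop, compared with a
    preset threshold (128 here is an illustrative assumption)."""
    values = [p for row in screen_pixels for p in row]
    return sum(values) / len(values) > luminance_threshold
```

A mostly-white crop exceeds the threshold and is judged bright; a dark crop is judged protected.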
In the above screen display state determination method, the obtained work area image is first checked for human body features; when none is present, a screen display state detection instruction is triggered and the screen display state is determined from the screen's pixel information. This realizes automatic detection and determination of the screen display state and improves the efficiency of recognizing it.
In one embodiment, referring to Fig. 3, which is a flow diagram of the steps of the screen display state determination method in another embodiment, the method includes:

Step S210: obtain a work area image.

Step S220: detect whether a human body feature is present.
Step S222: when a human body feature is detected, obtain the personnel human body feature associated with the work area image.

The personnel human body feature is the human body feature of the member of staff associated with the work area image. Generally, one workstation corresponds to one member of staff; in other embodiments, a workstation can correspond to several. The staff associated with a workstation are the people authorized to interact with the display device at that workstation.

In one embodiment, the member of staff corresponding to each workstation is enrolled in advance: their human body feature is captured as the personnel human body feature, each workstation is bound to its personnel human body feature, and the binding is stored in a database. Specifically, the server extracts the workstation information from the work area image, looks up the associated personnel human body feature in the database by that information, and calls a matching algorithm to match the found personnel human body feature against the detected human body feature. The workstation information can be a workstation number, e.g. 001, which is used to search the database for the associated personnel human body feature.
Step S224: calculate the degree of match between the detected human body feature and the personnel human body feature.

Specifically, the server calls a matching algorithm to match the human body feature detected in the work area image against the authorized personnel human body feature and obtain a degree of match. The matching can compare captured hand features, skeletal features or facial features, without restriction here.
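The matching step might be sketched as follows. Cosine similarity over fixed-length feature vectors is a deliberately simple, library-free stand-in for the descriptor matching that the embodiment leaves open (SIFT and SURF are named below as options); the 0.9 threshold is an assumption.

```python
import math

def matching_degree(detected_feature, personnel_feature):
    """Cosine similarity between two feature vectors; a stand-in for the
    feature matching the embodiment describes."""
    dot = sum(a * b for a, b in zip(detected_feature, personnel_feature))
    norm_a = math.sqrt(sum(a * a for a in detected_feature))
    norm_b = math.sqrt(sum(b * b for b in personnel_feature))
    return dot / (norm_a * norm_b)

def is_authorized(detected_feature, personnel_feature, threshold=0.9):
    # Greater than the preset match threshold -> authorized person.
    return matching_degree(detected_feature, personnel_feature) > threshold
```

Identical vectors give a degree of match of 1.0 (authorized); orthogonal vectors give 0.0 (unauthorized, so screen extraction is triggered).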
Step S226: compare the degree of match with a preset match threshold; when the degree of match is less than the preset match threshold, execute step S230 and extract a screen area image from the work area image.

Specifically, the detected human body feature and the personnel human body feature are input into the matching algorithm, which outputs their degree of match. The matching algorithm can be the scale-invariant feature transform (SIFT) algorithm or the speeded-up robust features (SURF) algorithm, among others. When the degree of match obtained from the matching algorithm is greater than the preset match threshold, the detected human body feature is determined to belong to an authorized member of staff; when it is not greater than the preset match threshold, the detected human body feature is determined to belong to an unauthorized person, and a screen area image is then extracted from the work area image.
Step S240: determine the screen display state from the pixel information of the screen area image.

This step can refer to the method described in the above embodiment for determining the screen display state from the pixel information of the screen area image, and is not repeated here.
In this embodiment, after a human body feature is detected in the work area image, it is examined further: the detected human body feature is matched against the authorized personnel human body feature, and the matching result is used to judge whether the person in the work area is authorized. When they are not, extraction of the screen area image is triggered and the display state of the screen area is determined, further improving information security.
In one embodiment, after determining the screen display state from the pixel information of the screen area image, the method comprises: when the screen display state is determined to be bright, changing the screen display state to the screen protection state.

The display state of the screen is the state in which the screen shows information. In one embodiment, it is divided into a bright state, in which the screen has luminance and can show information, and a screen protection state, in which the screen's content is protected so that the information on it cannot be stolen by unauthorized people. Specifically, the screen protection state can include the screen being off, or being shielded by an external object so that it is protected.
In one embodiment, a locked screen still has luminance and is therefore determined to be in the bright state. In other embodiments, the screen content shown in the locked state is stored in a lock-screen information database in advance; when the server determines that the screen display state is bright, it obtains the content currently shown on the bright screen, and when that content matches an entry in the lock-screen information database, it determines that the screen display state is the protected state. The screen content can be a screensaver image or text carrying a lock indicator, for example.
In one embodiment, when no human body feature is detected in the work area image and the screen area image is determined to be bright, the screen display state is changed to the screen protection state. In another embodiment, when the human body feature detected in the work area belongs to an unauthorized person, the person at the workstation is judged to be unauthorized; when the screen display state is also determined to be bright, it is changed to the screen protection state so that the information shown on the screen does not leak, further improving information security.

In this embodiment, when the work area image is determined to contain no member of staff, or to contain an unauthorized person, while the screen is bright, the screen display state is changed to the screen protection state in time, protecting the information and preventing it from being stolen.
In one embodiment, determining the screen display state from the pixel information of the screen area image comprises: obtaining the total pixel count of the screen area image; converting the screen area image into a binary image and calculating the valid pixel count of the binary image; calculating the ratio of the valid pixel count to the total pixel count; when the ratio is greater than a preset pixel threshold, determining that the screen display state is bright; and when the ratio is not greater than the preset pixel threshold, determining that the screen display state is the screen protection state.

A pixel is the smallest image unit, and an image is composed of pixels; obtaining the effective information in an image amounts to obtaining its pixel information. The server obtains the total pixel count of the screen area image; in one embodiment, this is the product of the image's pixel counts along its length and width.
A binary image is an image that has undergone binarization: the luminance value of each pixel is either 0 (black) or 255 (white), so the whole image is purely black and white. Specifically, the obtained screen area image is first converted to a grayscale image, and the grayscale image is then binarized to obtain the corresponding binary image.

In one embodiment, the binarization threshold for the grayscale image can be set between 60 and 130, for example to 80: luminance values greater than 80 are set to 255, and values below 80 are set to 0. In other embodiments, the choice of threshold is unrestricted and can be set according to the circumstances.
A valid pixel in the binary image is a pixel carrying luminance information. Specifically, the pixels whose luminance value is 255 are extracted as the valid pixels, their share of the total pixel count is calculated, and the luminance of the current screen area image is assessed from that share.

In this embodiment, the screen area image is converted into a binary image, and the judgement of the screen display state is transformed into computing the valid pixels' share of all pixels in the binary image; the display state of the current screen area image is determined from that share, improving the efficiency of recognizing the display state of the screen area image.
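A compact sketch of this embodiment's ratio test. The binarization threshold of 80 comes from the text above, while the 0.2 ratio threshold and the list-of-rows grayscale representation are illustrative assumptions.

```python
def screen_state_from_crop(gray_pixels, binarize_at=80, ratio_threshold=0.2):
    """Binarize a grayscale screen crop (values above the threshold count
    as white/valid pixels), then compare the valid pixels' share of the
    total pixel count with a preset ratio, as the embodiment describes."""
    total = sum(len(row) for row in gray_pixels)
    valid = sum(1 for row in gray_pixels for p in row if p > binarize_at)
    return "bright" if valid / total > ratio_threshold else "protected"
```

A crop that is three-quarters bright exceeds the ratio threshold and is judged bright; an all-dark crop is judged protected.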
In one embodiment, the method further comprises: obtaining a video stream and extracting multiple work area images from it at a preset frequency; and when no human body feature is detected in multiple consecutive work area images within a preset duration, and the screen display state is determined to be bright, changing the screen display state to the screen protection state.

A video stream is transmitted video data; in one embodiment, it is the feed monitoring the work area in real time. The stream contains multiple image frames, and the server intercepts frames from it at a preset sampling frequency as work area images. For example, the server extracts frames from the surveillance video at 60 frames per second and runs human body feature detection on each extracted frame in real time; when no human body feature is detected in a frame, the screen area image in that frame is extracted and the screen display state is determined from it. When no human body feature is detected in multiple consecutive work area images within the preset duration, and the screen display state in those images is bright throughout, the screen display state is changed to the screen protection state. The preset duration should be set so that the screen enters the protected state promptly after a member of staff leaves the workstation, before any information can leak; it can be set dynamically according to the nature of the work and its confidentiality, for example to one minute or five minutes.
In one embodiment, the preset duration is greater than the sampling interval at which frames are intercepted from the video stream, to guarantee that at least one work area image is obtained within the preset duration. In another embodiment, at least two work area images are obtained within the preset duration, to guard against a misjudgement in a single detection pass, such as determining a work area that does contain a member of staff to be empty and changing the screen display state to the screen protection state, which would disrupt that person's work.
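The debounce logic of this embodiment, which requires several consecutive qualifying frames before protecting the screen, might be sketched as follows; the frame count stands in for the preset duration divided by the sampling interval, and three frames is an illustrative choice.

```python
def should_protect(observations, required_frames=3):
    """observations: one (human_detected, screen_bright) pair per sampled
    frame, newest last. Switch to the protection state only when the last
    `required_frames` consecutive frames all show an empty work area with
    a bright screen, as the embodiment describes."""
    if len(observations) < required_frames:
        return False
    recent = observations[-required_frames:]
    return all(not human and bright for human, bright in recent)
```

A brief departure (one or two empty frames) does not trigger protection, which matches the embodiment's goal of not inconveniencing staff who step away momentarily.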
In this embodiment, work area images are obtained in real time from the surveillance video at a certain frequency, and each obtained image undergoes human body feature detection and screen display state judgement, realizing real-time processing of the surveillance video. When no member of staff has been present in the work area for a continuous period while the screen display state is bright, the screen display state is changed to the screen protection state, protecting the information shown on the screen.

Moreover, the screen's display state is changed only after the work area has been without a member of staff, with the screen bright, for some time. This avoids changing the screen's display state when a member of staff has only briefly left the workstation, in circumstances that would not cause information leakage, which would inconvenience them.
In one embodiment, detecting whether a human body feature is present in the work area image comprises: inputting the work area image into a human pose machine learning model; obtaining key point coordinates from the human pose machine learning model; connecting the key point coordinates to obtain a profile; and when the similarity between the profile and a human contour figure exceeds a preset similarity threshold, determining that a human body feature is present in the work area image, and otherwise determining that no human body feature is present.
The goal of pose estimation with a human pose machine learning model is to depict the shape of the human body in an image or video. Human pose machine learning models include the DensePose, OpenPose, Realtime Multi-Person Pose Estimation, AlphaPose, Human Body Pose Estimation and DeepPose models, among others.

The key point coordinates in the work area image are obtained through the human pose learning model; the key points can be the points that make up the human pose.
In one embodiment, the key point coordinates include skeleton joint point coordinates and hand joint point coordinates. The skeleton joint point coordinates (bpoints) cover the 25 major joint points of the human skeleton, such as the nose, neck, right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, sacrum, right hip, right knee, right ankle, left hip, left knee, left ankle, right eye, left eye, right ear, left ear, left big toe, left small toe, left heel, right big toe, right small toe and right heel. The hand joint point coordinates (hpoints) cover the 42 major joint points of both hands, 21 points for each hand. Taking the right hand as an example, these include the palm root, palm center, thumb base, thumb middle joint, thumb tip, index finger base, index finger proximal joint, index finger middle joint, index finger tip, middle finger base, middle finger proximal joint, middle finger middle joint, middle finger tip, ring finger base, ring finger proximal joint, ring finger middle joint, ring finger tip, little finger base, little finger proximal joint, little finger middle joint and little finger tip.
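For reference, the 25-point skeleton layout enumerated above matches the OpenPose BODY_25 convention, which this embodiment appears to follow; the sketch below builds the two index maps (the identifier names are illustrative, not part of the patent text):

```python
# Hypothetical index maps for the bpoints / hpoints key points listed above.
# The ordering of BODY_25 follows the OpenPose BODY_25 convention.
BODY_25 = [
    "nose", "neck", "right_shoulder", "right_elbow", "right_wrist",
    "left_shoulder", "left_elbow", "left_wrist", "sacrum",
    "right_hip", "right_knee", "right_ankle",
    "left_hip", "left_knee", "left_ankle",
    "right_eye", "left_eye", "right_ear", "left_ear",
    "left_big_toe", "left_small_toe", "left_heel",
    "right_big_toe", "right_small_toe", "right_heel",
]

# Each hand contributes 21 points: two palm points, three thumb points,
# and four points per remaining finger, as enumerated in the text.
HAND_21 = ["palm_root", "palm_center",
           "thumb_base", "thumb_middle", "thumb_tip"] + [
    f"{finger}_{joint}"
    for finger in ("index", "middle_finger", "ring", "little")
    for joint in ("base", "proximal", "middle", "tip")
]

assert len(BODY_25) == 25       # 25 major skeleton joint points
assert len(HAND_21) == 21       # 21 points per hand, 42 across both hands
```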
Specifically, the server obtains a video stream, extracts image frames from the video stream at a preset frequency, extracts an original working region image from each image frame, and normalizes the original working region image to obtain the working region image. The normalization includes processing the original working region image into an image size suitable for the human body pose machine learning model. In one embodiment, the size of the training set images of the human body pose machine learning model is obtained, and the original working region image is resized to match the training set image size, yielding the working region image.
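The normalization step above can be sketched as follows; the 368x368 target size and the nearest-neighbour resampling are assumptions for illustration (a real pipeline would typically use a library resize such as OpenCV's cv2.resize):

```python
import numpy as np

def normalize_region(image: np.ndarray, target_hw: tuple) -> np.ndarray:
    """Resize an original working-region image to the model's training-set
    size using nearest-neighbour sampling (a stand-in for a library resize)."""
    th, tw = target_hw
    h, w = image.shape[:2]
    rows = np.arange(th) * h // th   # source row for each target row
    cols = np.arange(tw) * w // tw   # source column for each target column
    return image[rows][:, cols]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # one extracted video frame
region = normalize_region(frame, (368, 368))     # assumed model input size
```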
The working region image is input into the OpenPose human body pose machine learning model; key point coordinates are obtained from the model; and the key point coordinates are connected to obtain a contour map. When the similarity between the contour map and the reference human body contour map exceeds the preset similarity threshold, it is determined that a human body feature is present in the working region image; when the similarity does not exceed the preset similarity threshold, it is determined that no human body feature is present in the working region image. The preset similarity threshold may be, for example, greater than 80%, and is not limited here.
In this embodiment, the human body pose machine learning model automatically identifies the key point coordinates, and the identified key point coordinates are used to determine whether a human body feature is present, improving the efficiency and accuracy of human body feature recognition.
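The keypoints-to-contour similarity test described in this embodiment can be sketched as follows, assuming a simple centred, scale-normalised point-wise distance as the similarity measure (the patent does not fix a particular metric):

```python
import numpy as np

def contour_similarity(points: np.ndarray, template: np.ndarray) -> float:
    """Crude similarity between a key-point contour and a reference human
    contour: both point sets are centred and scale-normalised, then compared
    point-wise. Illustrates only the threshold test, not a production metric."""
    def norm(p):
        p = p - p.mean(axis=0)            # remove translation
        return p / (np.abs(p).max() + 1e-9)  # remove scale
    d = np.abs(norm(points) - norm(template)).mean()
    return max(0.0, 1.0 - d)

def has_human(points, template, threshold=0.8):
    # threshold mirrors the "greater than 80%" preset mentioned above
    return contour_similarity(points, template) > threshold

template = np.array([[0, 0], [1, 0], [1, 2], [0, 2]], dtype=float)
# A scaled and shifted copy of the template still matches:
assert has_human(template * 3.0 + 5.0, template)
```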
In one embodiment, extracting the screen region image from the working region image comprises: inputting the working region image into a target detection machine learning model to obtain the location information and matching probability of matching regions under multiple scale matching modes; obtaining the location information of the matching region with the highest matching probability; and extracting the screen region image corresponding to that location information from the working region image.
To extract the screen region image from the working region image more accurately, the working region image is first input into the target detection model, and images of the working region image at multiple scales are obtained from the target detection model. The blur level of the obtained multi-scale images increases with scale, simulating how a scene forms on the retina from near to far; the multi-scale images therefore contain both global information and local details, so richer information can be extracted.

By considering multiple scales when the size of the screen region is unknown, the best scale for the screen region image is found, improving the efficiency and accuracy of extracting the screen region image from the working region image.
From the obtained multi-scale images, the location information of the region containing the object to be matched in each scale image is obtained, together with the matching probability of that region. Specifically, the location information and matching probability of the screen region in each scale image are obtained, where the location information can be expressed as coordinates. The server extracts the image corresponding to the location information with the highest matching probability as the screen region; specifically, the screen region image is cropped from the working region image according to the coordinates of that location information. The target detection machine learning model may be a convolutional neural network model, a YOLO model, or the like, where the YOLO family includes the YOLOv1, YOLOv2 and YOLOv3 models.
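The highest-probability selection and crop step can be sketched as follows; the (x1, y1, x2, y2, probability) tuple layout is an assumption, standing in for the boxes a detector such as YOLOv3 would return across its scales:

```python
import numpy as np

def extract_screen_region(image: np.ndarray, detections: list) -> np.ndarray:
    """From (x1, y1, x2, y2, probability) candidates gathered across the
    detector's scale matching modes, crop the region with the highest
    matching probability out of the working region image."""
    x1, y1, x2, y2, _ = max(detections, key=lambda d: d[4])
    return image[y1:y2, x1:x2]

img = np.arange(100 * 100).reshape(100, 100)
dets = [(10, 10, 30, 30, 0.4),   # low-probability candidate
        (20, 40, 60, 90, 0.9)]   # winner: highest matching probability
crop = extract_screen_region(img, dets)
```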
In one embodiment, the screen region image is obtained using the YOLOv3 model. The working region image is first normalized to an image size suitable for the YOLOv3 model, and the normalized working region image is input into the YOLOv3 model. The YOLOv3 model extracts images of the working region image at multiple scales and obtains the location information and matching probability of the matching regions under the multiple scale matching modes; the location information of the matching region with the highest matching probability is obtained, and the screen region image corresponding to that location information is extracted from the working region image.
In this embodiment, the screen region image is automatically identified and extracted by the target detection machine learning model, realizing automatic processing and detection of the data and improving the efficiency and accuracy of the image processing.
It should be understood that although the steps in the flowcharts of Figs. 2-3 are shown in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 2-3 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; the execution order of these sub-steps or stages is also not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 4, a screen display state discrimination apparatus is provided, comprising: an image acquisition module 410, a human body feature detection module 420, a screen image extraction module 430 and a first display state discrimination module 440.

The image acquisition module 410 is configured to obtain a working region image.

The human body feature detection module 420 is configured to detect whether a human body feature is present in the working region image.

The screen image extraction module 430 is configured to extract a screen region image from the working region image when the human body feature is not detected.

The first display state discrimination module 440 is configured to discriminate the screen display state according to pixel information of the screen region image.
In one embodiment, the apparatus further comprises:

A personnel feature acquisition module, configured to obtain, when the human body feature is detected, the personnel human body feature associated with the working region image.

A matching degree calculation module, configured to calculate the matching degree between the human body feature and the personnel human body feature.

A second display state discrimination module, configured to extract, when the matching degree is less than a preset matching degree threshold, the screen region image from the working region image, and to discriminate the screen display state according to the pixel information of the screen region image.
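The matching-degree comparison above can be sketched as follows, assuming cosine similarity of feature vectors as the matching degree and 0.7 as the preset matching degree threshold (the patent leaves both unspecified):

```python
import numpy as np

def matching_degree(feature: np.ndarray, personnel_feature: np.ndarray) -> float:
    """Cosine similarity as one possible matching degree between the detected
    human body feature and the registered personnel human body feature."""
    num = float(feature @ personnel_feature)
    den = float(np.linalg.norm(feature) * np.linalg.norm(personnel_feature)) + 1e-9
    return num / den

def should_check_screen(feature, personnel_feature, threshold=0.7):
    # Below the preset matching-degree threshold: the person does not match
    # the registered personnel, so fall back to inspecting the screen region.
    return matching_degree(feature, personnel_feature) < threshold

v = np.ones(4)
assert matching_degree(v, v) > 0.99        # identical features match
assert should_check_screen(v, -v)          # opposite features do not
```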
In one embodiment, the apparatus further comprises:

A display state change module, configured to change the screen display state to a screen protection state when the screen display state is determined to be a bright screen.
In one embodiment, the first display state discrimination module 440 comprises:

A total pixel value acquisition unit, configured to obtain the total pixel value of the screen region image;

A valid pixel value calculation unit, configured to convert the screen region image into a binary image and calculate the valid pixel value of the binary image;

A ratio calculation unit, configured to calculate the ratio of the valid pixel value to the total pixel value; and

A first display state discrimination unit, configured to determine that the screen display state is a bright screen when the ratio is greater than a preset pixel threshold, and to determine that the screen display state is the screen protection state when the ratio is not greater than the preset pixel threshold.
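The four units above amount to a short pixel-ratio test. A minimal sketch, with both the binarization level and the preset pixel threshold chosen arbitrarily for illustration:

```python
import numpy as np

def screen_is_bright(screen: np.ndarray, binarize_at: int = 50,
                     pixel_threshold: float = 0.2) -> bool:
    """Discriminate the display state from pixel information: binarize the
    screen region image, count the 'valid' (lit) pixels, and compare the
    valid/total ratio with a preset pixel threshold."""
    if screen.ndim == 3:                  # crude grayscale conversion
        screen = screen.mean(axis=2)
    binary = screen > binarize_at         # binary image
    valid = int(binary.sum())             # valid pixel value
    total = binary.size                   # total pixel value
    return valid / total > pixel_threshold  # bright screen vs. screensaver

dark = np.zeros((8, 8))           # screensaver-like: no lit pixels
lit = np.full((8, 8), 200.0)      # bright desktop: every pixel lit
assert not screen_is_bright(dark)
assert screen_is_bright(lit)
```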
In one embodiment, the apparatus further comprises:

A multi-image acquisition module, configured to obtain a video stream and extract multiple working region images from the video stream at a preset frequency; and

A third display state discrimination module, configured to change the screen display state to the screen protection state when, in multiple consecutive working region images within a preset duration, the human body feature is not detected and the screen display state is determined to be a bright screen.
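The multi-frame condition above can be sketched as a simple check over per-frame detector outputs; the (human_detected, screen_bright) pair layout is an assumption for illustration:

```python
def should_protect(observations: list) -> bool:
    """observations: one (human_detected, screen_bright) pair per working
    region image extracted within the preset duration. The screen display
    state is changed to the screen protection state only if no frame saw a
    human body feature and every frame saw a bright screen."""
    return bool(observations) and all(
        not human and bright for human, bright in observations
    )

assert should_protect([(False, True), (False, True)])      # lock the screen
assert not should_protect([(True, True), (False, True)])   # a person appeared
```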
In one embodiment, the human body feature detection module 420 comprises:

A first input unit, configured to input the working region image into a human body pose machine learning model;

A coordinate acquisition unit, configured to obtain key point coordinates from the human body pose machine learning model;

A contour map acquisition unit, configured to connect the key point coordinates to obtain a contour map; and

A second display state discrimination unit, configured to determine that a human body feature is present in the working region image when the similarity between the contour map and a human body contour map exceeds a preset similarity threshold, and to determine that no human body feature is present in the working region image when the similarity does not exceed the preset similarity threshold.
In one embodiment, the screen image extraction module 430 comprises:

A second input unit, configured to input the working region image into a target detection machine learning model to obtain the location information and matching probability of matching regions under multiple scale matching modes;

A position acquisition unit, configured to obtain the location information of the matching region with the highest matching probability; and

A screen region image acquisition unit, configured to extract the screen region image corresponding to the location information from the working region image.
For specific limitations of the screen display state discrimination apparatus, reference may be made to the limitations of the screen display state discrimination method above, which are not repeated here. Each module in the above screen display state discrimination apparatus may be implemented wholly or partly by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or be independent of, the processor of the computer device, or may be stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in Fig. 5. The computer device includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device provides computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store screen display state discrimination data. The network interface of the computer device is used to communicate with external terminals through a network connection. The computer program, when executed by the processor, implements a screen display state discrimination method.
Those skilled in the art will understand that the structure shown in Fig. 5 is only a block diagram of part of the structure relevant to the solution of the present application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different component arrangement.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program. When executing the computer program, the processor performs the steps of: obtaining a working region image; detecting whether a human body feature is present in the working region image; extracting a screen region image from the working region image when the human body feature is not detected; and discriminating the screen display state according to pixel information of the screen region image.
In one of the embodiments, when executing the computer program, the processor further performs the steps of: obtaining, when the human body feature is detected, the personnel human body feature associated with the working region image; calculating the matching degree between the human body feature and the personnel human body feature; and, when the matching degree is less than a preset matching degree threshold, extracting the screen region image from the working region image and discriminating the screen display state according to the pixel information of the screen region image.
In one of the embodiments, after the step of discriminating the screen display state according to the pixel information of the screen region image, the processor, when executing the computer program, further performs: changing the screen display state to a screen protection state when the screen display state is determined to be a bright screen.
In one of the embodiments, when implementing the step of discriminating the screen display state according to the pixel information of the screen region image, the processor, when executing the computer program, further performs: obtaining the total pixel value of the screen region image; converting the screen region image into a binary image and calculating the valid pixel value of the binary image; calculating the ratio of the valid pixel value to the total pixel value; and determining that the screen display state is a bright screen when the ratio is greater than a preset pixel threshold, or that the screen display state is the screen protection state when the ratio is not greater than the preset pixel threshold.
In one of the embodiments, when executing the computer program, the processor further performs the steps of: obtaining a video stream and extracting multiple working region images from the video stream at a preset frequency; and changing the screen display state to the screen protection state when, in multiple consecutive working region images within a preset duration, the human body feature is not detected and the screen display state is determined to be a bright screen.
In one of the embodiments, when implementing the step of detecting whether a human body feature is present in the working region image, the processor, when executing the computer program, further performs: inputting the working region image into a human body pose machine learning model; obtaining key point coordinates from the human body pose machine learning model; connecting the key point coordinates to obtain a contour map; and determining that a human body feature is present in the working region image when the similarity between the contour map and a human body contour map exceeds a preset similarity threshold, or that no human body feature is present in the working region image when the similarity does not exceed the preset similarity threshold.
In one of the embodiments, when implementing the step of extracting the screen region image from the working region image, the processor, when executing the computer program, further performs: inputting the working region image into a target detection machine learning model to obtain the location information and matching probability of matching regions under multiple scale matching modes; obtaining the location information of the matching region with the highest matching probability; and extracting the screen region image corresponding to the location information from the working region image.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When executed by a processor, the computer program performs the steps of: obtaining a working region image; detecting whether a human body feature is present in the working region image; extracting a screen region image from the working region image when the human body feature is not detected; and discriminating the screen display state according to pixel information of the screen region image.
In one of the embodiments, when executed by the processor, the computer program further performs the steps of: obtaining, when the human body feature is detected, the personnel human body feature associated with the working region image; calculating the matching degree between the human body feature and the personnel human body feature; and, when the matching degree is less than a preset matching degree threshold, extracting the screen region image from the working region image and discriminating the screen display state according to the pixel information of the screen region image.
In one of the embodiments, after the step of discriminating the screen display state according to the pixel information of the screen region image, the computer program, when executed by the processor, further performs: changing the screen display state to a screen protection state when the screen display state is determined to be a bright screen.
In one of the embodiments, when implementing the step of discriminating the screen display state according to the pixel information of the screen region image, the computer program, when executed by the processor, further performs: obtaining the total pixel value of the screen region image; converting the screen region image into a binary image and calculating the valid pixel value of the binary image; calculating the ratio of the valid pixel value to the total pixel value; and determining that the screen display state is a bright screen when the ratio is greater than a preset pixel threshold, or that the screen display state is the screen protection state when the ratio is not greater than the preset pixel threshold.
In one of the embodiments, when executed by the processor, the computer program further performs the steps of: obtaining a video stream and extracting multiple working region images from the video stream at a preset frequency; and changing the screen display state to the screen protection state when, in multiple consecutive working region images within a preset duration, the human body feature is not detected and the screen display state is determined to be a bright screen.
In one of the embodiments, when implementing the step of detecting whether a human body feature is present in the working region image, the computer program, when executed by the processor, further performs: inputting the working region image into a human body pose machine learning model; obtaining key point coordinates from the human body pose machine learning model; connecting the key point coordinates to obtain a contour map; and determining that a human body feature is present in the working region image when the similarity between the contour map and a human body contour map exceeds a preset similarity threshold, or that no human body feature is present in the working region image when the similarity does not exceed the preset similarity threshold.
In one of the embodiments, when implementing the step of extracting the screen region image from the working region image, the computer program, when executed by the processor, further performs: inputting the working region image into a target detection machine learning model to obtain the location information and matching probability of matching regions under multiple scale matching modes; obtaining the location information of the matching region with the highest matching probability; and extracting the screen region image corresponding to the location information from the working region image.
Those of ordinary skill in the art will understand that all or part of the processes of the above embodiment methods can be completed by instructing related hardware through a computer program. The computer program may be stored in a non-volatile computer-readable storage medium, and the computer program, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features have been described; however, as long as a combination of technical features involves no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application patent shall be subject to the appended claims.
Claims (10)
1. A screen display state discrimination method, the method comprising:
obtaining a working region image;
detecting whether a human body feature is present in the working region image;
extracting a screen region image from the working region image when the human body feature is not detected; and
discriminating the screen display state according to pixel information of the screen region image.
2. The method according to claim 1, wherein the method further comprises:
obtaining, when the human body feature is detected, a personnel human body feature associated with the working region image;
calculating a matching degree between the human body feature and the personnel human body feature; and
when the matching degree is less than a preset matching degree threshold, extracting the screen region image from the working region image, and discriminating the screen display state according to the pixel information of the screen region image.
3. The method according to claim 2, wherein, after the discriminating the screen display state according to the pixel information of the screen region image, the method comprises:
changing the screen display state to a screen protection state when the screen display state is determined to be a bright screen.
4. The method according to claim 1 or 2, wherein the discriminating the screen display state according to the pixel information of the screen region image comprises:
obtaining a total pixel value of the screen region image;
converting the screen region image into a binary image, and calculating a valid pixel value of the binary image;
calculating a ratio of the valid pixel value to the total pixel value; and
determining that the screen display state is a bright screen when the ratio is greater than a preset pixel threshold, and determining that the screen display state is a screen protection state when the ratio is not greater than the preset pixel threshold.
5. The method according to claim 1, wherein the method further comprises:
obtaining a video stream, and extracting multiple working region images from the video stream at a preset frequency; and
changing the screen display state to a screen protection state when, in multiple consecutive working region images within a preset duration, the human body feature is not detected and the screen display state is determined to be a bright screen.
6. The method according to claim 1, wherein the detecting whether a human body feature is present in the working region image comprises:
inputting the working region image into a human body pose machine learning model;
obtaining key point coordinates from the human body pose machine learning model;
connecting the key point coordinates to obtain a contour map; and
determining that a human body feature is present in the working region image when a similarity between the contour map and a human body contour map exceeds a preset similarity threshold, and determining that no human body feature is present in the working region image when the similarity does not exceed the preset similarity threshold.
7. The method according to claim 1, wherein the extracting a screen region image from the working region image comprises:
inputting the working region image into a target detection machine learning model to obtain location information and matching probabilities of matching regions under multiple scale matching modes;
obtaining the location information of the matching region with the highest matching probability; and
extracting the screen region image corresponding to the location information from the working region image.
8. A screen display state discrimination apparatus, the apparatus comprising:
an image acquisition module, configured to obtain a working region image;
a human body feature detection module, configured to detect whether a human body feature is present in the working region image;
a screen image extraction module, configured to extract a screen region image from the working region image when the human body feature is not detected; and
a first display state discrimination module, configured to discriminate the screen display state according to pixel information of the screen region image.
9. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910773812.2A CN110490140A (en) | 2019-08-21 | 2019-08-21 | Screen display state method of discrimination, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110490140A true CN110490140A (en) | 2019-11-22 |
Family
ID=68552499
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910773812.2A Pending CN110490140A (en) | 2019-08-21 | 2019-08-21 | Screen display state method of discrimination, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110490140A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116522417A (en) * | 2023-07-04 | 2023-08-01 | 广州思涵信息科技有限公司 | Security detection method, device, equipment and storage medium for display equipment |
CN116522417B (en) * | 2023-07-04 | 2023-09-19 | 广州思涵信息科技有限公司 | Security detection method, device, equipment and storage medium for display equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108881621A (en) * | 2018-05-30 | 2018-11-23 | 上海与德科技有限公司 | A kind of screen locking method, device, terminal and storage medium |
CN109147659A (en) * | 2018-08-23 | 2019-01-04 | 西安蜂语信息科技有限公司 | Control method for screen display, device, equipment and storage medium |
CN109521875A (en) * | 2018-10-31 | 2019-03-26 | 联想(北京)有限公司 | A kind of screen control method, electronic equipment and computer readable storage medium |
CN110046600A (en) * | 2019-04-24 | 2019-07-23 | 北京京东尚科信息技术有限公司 | Method and apparatus for human testing |
- 2019-08-21: CN CN201910773812.2A patent/CN110490140A/en, active, Pending
Non-Patent Citations (1)
Title |
---|
LEI, Lihui: "Research on a Computer Automatic Screen-Lock System Based on Image Recognition", Secrecy Science and Technology (《保密科学技术》) * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20191122 |