CN107742071A - Device unlocking method and electronic apparatus for online games - Google Patents
Device unlocking method and electronic apparatus for online games
- Publication number
- CN107742071A (application CN201710813516.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- depth
- equipment
- person
- online game
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/131—Protocols for games, networked simulations or virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2221/00—Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/21—Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/2133—Verifying human interaction, e.g., Captcha
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Computer Security & Cryptography (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a device unlocking method for an online game, for use in an electronic apparatus. The device unlocking method for an online game includes: acquiring a scene image of the current user; acquiring a depth image of the current user; processing the scene image and the depth image to extract the person region occupied by the current user in the scene image and obtain a person-region image; and performing unlock processing on the equipment based on the person-region image. Because the acquisition of the depth image is not easily affected by factors such as illumination or the color distribution of the scene, the person region extracted from the depth image is more accurate; in particular, the boundary of the person region can be calibrated precisely. Performing unlock processing on the equipment based on the person-region image therefore improves the security of equipment use in online games.
Description
Technical field
The present invention relates to the field of computer technology, and more particularly to a device unlocking method for online games and an electronic apparatus.
Background
In the related art, feature points are usually used to extract a person's contour. However, the accuracy of a contour extracted from feature points is not high; in particular, the boundary of the person cannot be calibrated precisely, which degrades the extraction of the person-region image. As a result, the precision of unlocking equipment in an online game based on such a person-region image is low.
Summary of the invention
Embodiments of the present invention provide a device unlocking method for online games, a device unlocking apparatus for online games, an electronic apparatus, and a computer-readable storage medium.
The device unlocking method for online games of the embodiments of the present invention is used in an electronic apparatus, and includes:

acquiring a scene image of the current user;

acquiring a depth image of the current user;

processing the scene image and the depth image to extract the person region occupied by the current user in the scene image and obtain a person-region image; and

performing unlock processing on the equipment based on the person-region image.
The device unlocking apparatus for online games of the embodiments of the present invention is used in an electronic apparatus, and includes: a visible-light camera for acquiring a scene image of the current user; a depth-image acquisition component for acquiring a depth image of the current user; and a processor configured to process the scene image and the depth image to extract the person region occupied by the current user in the scene image and obtain a person-region image, and to perform unlock processing on the equipment based on the person-region image.
The electronic apparatus of the embodiments of the present invention includes one or more processors, a memory, and one or more programs. The one or more programs are stored in the memory and configured to be executed by the one or more processors, and include instructions for performing the device unlocking method for online games described above.

The computer-readable storage medium of the embodiments of the present invention includes a computer program used in combination with an electronic apparatus capable of imaging. The computer program can be executed by a processor to carry out the device unlocking method for online games described above.
The device unlocking method for online games, the device unlocking apparatus for online games, the electronic apparatus, and the computer-readable storage medium of the embodiments of the present invention acquire a depth image of the current user in order to extract the person region from the scene image. Because the acquisition of the depth image is not easily affected by factors such as illumination or the color distribution of the scene, the person region extracted from the depth image is more accurate; in particular, the boundary of the person region can be calibrated precisely. Performing unlock processing on the equipment based on the person-region image therefore improves the security of equipment use in online games.

Additional aspects and advantages of the present invention will be set forth in part in the following description, will in part become apparent from that description, or may be learned by practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a device unlocking method for online games according to some embodiments of the present invention;

Fig. 2 is a block diagram of a device unlocking apparatus for online games according to some embodiments of the present invention;

Fig. 3 is a schematic structural diagram of an electronic apparatus according to some embodiments of the present invention;

Fig. 4 is a schematic flowchart of a device unlocking method for online games according to some embodiments of the present invention;

Fig. 5 is a schematic flowchart of a device unlocking method for online games according to some embodiments of the present invention;

Fig. 6(a) to Fig. 6(e) are schematic diagrams of a structured-light measurement scenario according to an embodiment of the present invention;

Fig. 7(a) and Fig. 7(b) are schematic diagrams of a structured-light measurement scenario according to an embodiment of the present invention;

Fig. 8 is a schematic flowchart of a device unlocking method for online games according to some embodiments of the present invention;

Fig. 9 is a schematic flowchart of a device unlocking method for online games according to some embodiments of the present invention;

Fig. 10 is a block diagram of an electronic apparatus according to some embodiments of the present invention;

Fig. 11 is a block diagram of an electronic apparatus according to some embodiments of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and are not to be construed as limiting the present invention. On the contrary, the embodiments of the present invention cover all changes, modifications, and equivalents falling within the spirit and scope of the appended claims.
Fig. 1 is a schematic flowchart of a device unlocking method for online games proposed by an embodiment of the present invention.

Referring to Fig. 1, the method includes:

S101: acquiring a scene image of the current user.

S102: acquiring a depth image of the current user.

S103: processing the scene image and the depth image to extract the person region occupied by the current user in the scene image and obtain a person-region image.

S104: performing unlock processing on the equipment based on the person-region image.
Optionally, performing unlock processing on the equipment based on the person-region image includes: extracting the feature points corresponding to the person-region image; determining whether the extracted feature points match preset feature points; if they match, triggering unlock processing of the equipment; and if they do not match, determining that unlocking of the equipment has failed. By acquiring a depth image of the current user to extract the person-region image and then performing unlock processing on the equipment, the unlocking precision is higher, which effectively improves the security of equipment use in online games.
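Under the assumption of simple stand-in functions for each step, the S101 to S104 flow plus the optional match test can be orchestrated roughly as follows. All function names, images, the toy feature summary, and the tolerance are illustrative assumptions, not the patent's concrete algorithm or API:

```python
import numpy as np

def acquire_scene_image():            # S101: stand-in for the visible-light camera
    return np.full((8, 8), 200.0)

def acquire_depth_image():            # S102: stand-in for the depth component
    d = np.full((8, 8), 3.0)          # background wall at 3 m
    d[2:6, 2:6] = 1.0                 # the user stands closer, at ~1 m
    return d

def extract_person_region(scene, depth):
    """S103: keep only scene pixels whose depth says they belong to the user."""
    return np.where(depth < 1.5, scene, 0.0)

def unlock(person_region, preset_sum, tol=1.0):
    """S104 + optional match test: compare an extracted feature with a preset one."""
    feature = float(person_region.sum())   # toy "feature point" summary
    if abs(feature - preset_sum) < tol:
        return "unlocked"
    return "unlock failed"

person = extract_person_region(acquire_scene_image(), acquire_depth_image())
print(unlock(person, preset_sum=200.0 * 16))   # matching preset -> unlocked
print(unlock(person, preset_sum=0.0))          # mismatching preset -> unlock failed
```

A real implementation would replace the sum with fingerprint, iris, gesture, or 3D-action features, as the description below suggests.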
Referring to Figs. 1 and 2, the device unlocking method for online games of the embodiments of the present invention can be used in an electronic apparatus 1000.

Referring to Fig. 3, the device unlocking method for online games of the embodiments of the present invention can be implemented by the device unlocking apparatus 100 for online games of the embodiments of the present invention. The device unlocking apparatus 100 for online games is used in the electronic apparatus 1000 and includes a visible-light camera 11, a depth-image acquisition component 12, and a processor 20. S101 can be implemented by the visible-light camera 11, S102 by the depth-image acquisition component 12, and S103 and S104 by the processor 20.
In other words, the visible-light camera 11 can be used to acquire the scene image of the current user; the depth-image acquisition component 12 can be used to acquire the depth image of the current user; and the processor 20 can be used to process the scene image and the depth image to extract the person region occupied by the current user in the scene image, obtain the person-region image, and perform unlock processing on the equipment based on the person-region image.
The scene image may be a grayscale image or a color image, and the depth image characterizes the depth information of each person or object in the scene containing the current user. The scene range of the scene image is consistent with the scene range of the depth image, so each pixel in the scene image has corresponding depth information in the depth image.
The device unlocking apparatus 100 for online games of the embodiments of the present invention can be applied to the electronic apparatus 1000 of the embodiments of the present invention. In other words, the electronic apparatus 1000 of the embodiments of the present invention includes the device unlocking apparatus 100 for online games of the embodiments of the present invention.
In some embodiments, the electronic apparatus 1000 includes a mobile phone, a tablet computer, a notebook computer, a smart bracelet, a smart watch, a smart helmet, smart glasses, and the like.
Existing methods for segmenting a person from the background mainly rely on the similarity and discontinuity of adjacent pixels in terms of pixel value, but such segmentation is easily affected by environmental factors such as ambient illumination. The device unlocking method for online games, the device unlocking apparatus 100 for online games, and the electronic apparatus 1000 of the embodiments of the present invention extract the person region from the scene image by acquiring a depth image of the current user. Because the acquisition of the depth image is not easily affected by factors such as illumination or the color distribution of the scene, the person region extracted from the depth image is more accurate; in particular, the boundary of the person region can be calibrated precisely. Furthermore, the more accurate the person-region image, the better the result when it is fused with a predetermined two-dimensional background image.
Referring to Fig. 4, in some embodiments, S102, acquiring the depth image of the current user, can include:

S401: projecting structured light onto the current user.

S402: capturing a structured-light image modulated by the current user.

S403: demodulating the phase information corresponding to each pixel of the structured-light image to obtain the depth image.
Referring again to Fig. 3, in some embodiments, the depth-image acquisition component 12 includes a structured-light projector 121 and a structured-light camera 122. S401 can be implemented by the structured-light projector 121, and S402 and S403 can be implemented by the structured-light camera 122.

In other words, the structured-light projector 121 can be used to project structured light onto the current user, and the structured-light camera 122 can be used to capture the structured-light image modulated by the current user and to demodulate the phase information corresponding to each pixel of the structured-light image to obtain the depth image.
Specifically, after the structured-light projector 121 projects structured light of a certain pattern onto the face and body of the current user, a structured-light image modulated by the current user is formed on the surfaces of the current user's face and body. The structured-light camera 122 captures the modulated structured-light image, which is then demodulated to obtain the depth image. The pattern of the structured light may be laser stripes, Gray codes, sinusoidal fringes, non-uniform speckle, or the like.
Referring to Fig. 5, in some embodiments, S403, demodulating the phase information corresponding to each pixel of the structured-light image to obtain the depth image, can include:

S501: demodulating the phase information corresponding to each pixel in the structured-light image.

S502: converting the phase information into depth information.

S503: generating the depth image according to the depth information.
Referring again to Fig. 2, in some embodiments, S501, S502, and S503 can be implemented by the structured-light camera 122.

In other words, the structured-light camera 122 can be further used to demodulate the phase information corresponding to each pixel in the structured-light image, convert the phase information into depth information, and generate the depth image according to the depth information.
Specifically, compared with the unmodulated structured light, the phase information of the modulated structured light has changed, so the structured light shown in the structured-light image is distorted, and the changed phase information characterizes the depth information of the object. Therefore, the structured-light camera 122 first demodulates the phase information corresponding to each pixel in the structured-light image and then calculates the depth information from that phase information, thereby obtaining the final depth image.
In order that those skilled in the art may more clearly understand the process of gathering the depth image of the current user's face and body from structured light, its concrete principle is illustrated below by taking the widely used grating projection technique (fringe projection technique) as an example. Grating projection belongs to area structured light in the broad sense.
As shown in Fig. 6(a), when area structured light is used for projection, sinusoidal fringes are first generated by computer programming and projected onto the measured object by the structured-light projector 121; the structured-light camera 122 then captures the degree to which the fringes are bent after modulation by the object, and the bent fringes are demodulated to obtain the phase, which is converted into depth information to obtain the depth image. To avoid errors or error coupling, the depth-image acquisition component 12 must be calibrated with structured light before depth information is collected; the calibration includes calibration of geometric parameters (for example, the relative position of the structured-light camera 122 and the structured-light projector 121), of the internal parameters of the structured-light camera 122, and of the internal parameters of the structured-light projector 121.
Specifically, in the first step, sinusoidal fringes are produced by computer programming. Because the phase must subsequently be obtained from the distorted fringes, for example by the four-step phase-shifting method, four fringe patterns with a phase difference of π/2 are generated here; the structured-light projector 121 projects the four patterns onto the measured object (the mask shown in Fig. 6(a)) in a time-multiplexed manner, and the structured-light camera 122 captures the image on the left of Fig. 6(b) while reading the fringes of the reference plane shown on the right of Fig. 6(b).
In the second step, phase recovery is performed. The structured-light camera 122 calculates the modulated phase map from the four captured fringe patterns (i.e., the structured-light images); the result at this point is a wrapped phase map. Because the result of the four-step phase-shifting algorithm is calculated by an arctangent function, the phase of the modulated structured light is limited to [-π, π]; that is, whenever the modulated phase exceeds [-π, π], it wraps around again. The resulting principal phase value is shown in Fig. 6(c).
During phase recovery, de-jump processing is required, i.e., the wrapped phase must be recovered to a continuous phase. As shown in Fig. 6(d), the left side is the modulated continuous phase map and the right side is the reference continuous phase map.
In the third step, the modulated continuous phase and the reference continuous phase are subtracted to obtain the phase difference (i.e., the phase information), which characterizes the depth information of the measured object relative to the reference plane; the phase difference is then substituted into the phase-to-depth conversion formula (whose parameters are obtained by calibration) to obtain the three-dimensional model of the measured object shown in Fig. 6(e).
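The phase recovery described above (four fringe images at π/2 offsets, an arctangent giving a wrapped phase in [-π, π], de-jump unwrapping, then subtraction of the reference phase) can be sketched numerically in one dimension. The fringe signal, the synthetic "object" bump, and the sampling grid are assumptions for illustration only:

```python
import numpy as np

# synthetic continuous phase: a reference ramp plus a bump caused by the object
x = np.linspace(0, 4 * np.pi, 200)
true_phase = x + 0.5 * np.exp(-((x - 6.0) ** 2))

# step 1: four fringe patterns whose phases differ by pi/2
I = [np.cos(true_phase + k * np.pi / 2) for k in range(4)]

# step 2: four-step phase shift -> wrapped phase in [-pi, pi] via an arctangent
wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])
assert wrapped.min() >= -np.pi and wrapped.max() <= np.pi

# de-jump processing: recover the continuous phase from the wrapped one
continuous = np.unwrap(wrapped)

# step 3: the phase difference w.r.t. the reference plane characterizes depth
phase_diff = continuous - x
print(float(np.abs(phase_diff).max()))  # ~0.5, the height of the object bump
```

In a real system this phase difference would then be fed through the calibrated phase-to-depth conversion formula rather than read off directly.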
It should be appreciated that, in practical applications, depending on the specific application scenario, the structured light employed in the embodiments of the present invention may be any other pattern besides the above grating.

As a possible implementation, the present invention can also use speckle structured light to collect the depth information of the current user.
Specifically, the method by which speckle structured light obtains depth information uses a diffractive element that is essentially a flat plate with a relief diffraction structure of a particular phase distribution; its cross section has a stepped relief structure of two or more levels. The thickness of the substrate in the diffractive element is approximately 1 micron, the heights of the steps are non-uniform, and the height can range from 0.7 micron to 0.9 micron. The structure shown in Fig. 7(a) is a local diffraction structure of the collimating beam-splitting element of this embodiment, and Fig. 7(b) is a cross-sectional side view along section A-A, with both abscissa and ordinate in microns. The speckle pattern generated by speckle structured light is highly random, and the pattern changes with distance. Therefore, before depth information is obtained with speckle structured light, the speckle patterns in space must first be calibrated: for example, within a range of 0 to 4 meters from the structured-light camera 122, a reference plane is taken every 1 centimeter, so 400 speckle images are saved after calibration; the smaller the calibration spacing, the higher the precision of the obtained depth information. Then, the structured-light projector 121 projects the speckle structured light onto the measured object (i.e., the current user), and the height differences on the surface of the measured object change the speckle pattern of the projected speckle structured light. After the structured-light camera 122 captures the speckle pattern projected on the measured object (i.e., the structured-light image), the speckle pattern is cross-correlated one by one with the 400 speckle images saved during calibration to obtain 400 correlation images. The position of the measured object in space shows a peak in the correlation images, and the depth information of the measured object is obtained by superimposing these peaks and performing interpolation.
An ordinary diffractive element diffracts a beam into multiple diffracted beams, but the light intensity differs greatly between the beams, and the risk of injury to the human eye is also large. Even if the diffracted light is diffracted again, the uniformity of the resulting beams is low, so the effect of projecting onto the measured object with beams diffracted by an ordinary diffractive element is poor. In this embodiment a collimating beam-splitting element is used; this element not only collimates the non-collimated light but also splits it: the non-collimated light reflected by the mirror exits the collimating beam-splitting element as multiple collimated beams at different angles, and the emitted collimated beams have approximately equal cross-sectional areas and approximately equal energy flux, so the projection effect of the scattered light diffracted from the beams is better. At the same time, the laser output is dispersed across the beams, further reducing the risk of injuring the human eye; and compared with other uniformly arranged structured light, speckle structured light consumes less power for the same collection effect.
Referring to Fig. 8, in some embodiments, S103, processing the scene image and the depth image to extract the person region occupied by the current user in the scene image and obtain the person-region image, can include:

S801: identifying the face region in the scene image.

S802: obtaining the depth information corresponding to the face region from the depth image.

S803: determining the depth range of the person region according to the depth information of the face region.

S804: determining, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range, to obtain the person-region image.
Referring again to Fig. 2, in some embodiments, S801, S802, S803, and S804 can be implemented by the processor 20.

In other words, the processor 20 can be further used to identify the face region in the scene image, obtain the depth information corresponding to the face region from the depth image, determine the depth range of the person region according to the depth information of the face region, and determine, according to that depth range, the person region that is connected with the face region and falls within the depth range, to obtain the person-region image.
Specifically, a trained deep-learning model can first be used to identify the face region in the scene image, and the depth information of the face region can then be determined from the correspondence between the scene image and the depth image. Because the face region includes features such as the nose, eyes, ears, and lips, the depth data corresponding to each feature of the face region in the depth image are different; for example, when the face faces the depth-image acquisition component 12, in the depth image captured by the component the depth data corresponding to the nose may be small, while the depth data corresponding to the ears may be large. Therefore, the depth information of the face region may be a single value or a range of values. When the depth information of the face region is a single value, that value may be obtained by averaging the depth data of the face region, or by taking the median of the depth data of the face region.
Because the person region includes the face region, i.e., the person region and the face region lie within some common depth range, after the processor 20 determines the depth information of the face region, it can set the depth range of the person region according to that depth information, and then extract the person region that falls within the depth range and is connected with the face region, to obtain the person-region image.
In this way, the person-region image can be extracted from the scene image according to the depth information. Because the acquisition of the depth information is not affected by factors such as illumination or color temperature in the environment, the extracted person-region image is more accurate.
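Steps S801 to S804 can be sketched on a toy depth map: take the face depth, widen it into a depth range, and keep only the pixels that fall in that range and are connected to the face region. The flood fill, the 5x5 map, and the ±0.5 m margin are illustrative assumptions:

```python
import numpy as np
from collections import deque

# toy 5x5 depth map (meters): user at ~1 m, wall at 3 m, plus one
# unconnected object that happens to share the user's depth range
depth = np.array([
    [3.0, 3.0, 3.0, 3.0, 1.1],
    [3.0, 1.0, 1.0, 3.0, 3.0],
    [3.0, 1.1, 1.1, 3.0, 3.0],
    [3.0, 1.2, 1.2, 3.0, 3.0],
    [3.0, 3.0, 3.0, 3.0, 3.0],
])
face = (1, 1)  # S801: assume the face detector returned this pixel

def person_mask(depth: np.ndarray, face: tuple, margin: float = 0.5) -> np.ndarray:
    """S802-S804: depth range around the face depth, then a 4-connected flood
    fill so only pixels connected with the face region are kept."""
    lo, hi = depth[face] - margin, depth[face] + margin   # S802 + S803
    mask = np.zeros(depth.shape, dtype=bool)
    mask[face] = True
    queue = deque([face])
    while queue:                                          # S804
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < depth.shape[0] and 0 <= nc < depth.shape[1]
                    and not mask[nr, nc] and lo <= depth[nr, nc] <= hi):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

mask = person_mask(depth, face)
print(int(mask.sum()))   # -> 6 connected person pixels
print(bool(mask[0, 4]))  # -> False: same depth, but not connected to the face
```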
Referring to Fig. 9, in some embodiments, the device unlocking method for online games further includes the following steps:

S901: processing the scene image to obtain a full-scene edge image of the scene image.

S902: correcting the person-region image according to the full-scene edge image.
Referring again to Fig. 2, in some embodiments, S901 and S902 can be implemented by the processor 20 and the visible-light camera 11.

In other words, the processor 20 can also be used to process the scene image to obtain the full-scene edge image of the scene image, and to correct the person-region image according to the full-scene edge image.
The processor 20 first performs edge extraction on the scene image to obtain the full-scene edge image, in which the edge lines include the edge lines of the current user and of the background objects in the scene where the current user is located. Specifically, edge extraction can be performed on the scene image with the Canny operator. The core of the Canny edge-extraction algorithm mainly includes the following steps: first, the scene image is convolved with a 2D Gaussian filter template to eliminate noise; then, the gradient magnitude of the grayscale of each pixel is obtained with a differential operator, and the gradient direction of the grayscale of each pixel is calculated from the gradient values, so the neighboring pixels of each pixel along its gradient direction can be found; then, each pixel is traversed, and if the grayscale value of a pixel is not the maximum compared with the grayscale values of the two neighboring pixels before and after it along its gradient direction, that pixel is considered not to be an edge point. In this way, the pixels at edge positions in the scene image can be determined, and the full-scene edge image after edge extraction is obtained.
After the processor 20 obtains the full-scene edge image, the person-region image is corrected according to the full-scene edge image. It will be appreciated that the person-region image is obtained by merging all pixels in the scene image that are connected with the face region and fall within the set depth range; in some scenarios, there may be some objects that are connected with the face region and fall within the depth range. Therefore, to make the extracted person-region image more accurate, the full-scene edge image can be used to correct the person-region image.
Further, the processor 20 can also perform a second correction on the corrected person-region image; for example, dilation processing can be applied to the corrected person-region image to expand it, so as to retain the edge details of the person-region image.
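The correction-plus-dilation step can be sketched as follows: intersect the depth-based person mask with the region delimited by the full-scene edges (a precomputed toy mask stands in here for the region enclosed by the Canny edge lines), then dilate the result to retain edge detail. Both masks and the single dilation pass are illustrative assumptions:

```python
import numpy as np

def dilate(mask: np.ndarray) -> np.ndarray:
    """One 4-connected dilation pass: expand the mask by one pixel."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

# depth-based person mask that wrongly swallowed a background object (column 4)
person = np.zeros((5, 5), dtype=bool)
person[1:4, 1:3] = True
person[2, 4] = True  # connected-in-depth object, not part of the user

# interior of the person as delimited by the full-scene edge image
# (stand-in for the region enclosed by the Canny edge lines)
inside_edges = np.zeros((5, 5), dtype=bool)
inside_edges[1:4, 1:3] = True

corrected = person & inside_edges   # S902: the edge image corrects the mask
refined = dilate(corrected)         # second correction: dilation

print(bool(refined[2, 4]))                       # -> False: stray object removed
print(int(corrected.sum()), int(refined.sum()))  # -> 6 16
```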
After the processor 20 extracts the person region occupied by the current user in the scene image and obtains the person-region image, unlock processing can be performed on the equipment based on the person-region image; performing unlock processing on the equipment based on the person-region image improves the security of equipment use in online games.
For example, feature-extraction algorithms can be used to extract features such as fingerprints, irises, gestures, or 3D actions from the person-region image, and these features are matched with pre-stored features. If the match is successful, the equipment of the online game is unlocked; if the match is unsuccessful, unlocking the equipment of the online game is forbidden, and the current user can be reminded. No restriction is imposed in this respect.
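A matching step like the one just described (extracted feature against pre-stored feature, unlock on success, refuse and remind on failure) might look like the following sketch. The cosine-similarity measure, the 0.9 threshold, and the 4-dimensional feature vectors are assumptions, not the patent's prescribed matcher:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def unlock_equipment(extracted: np.ndarray, stored: np.ndarray,
                     threshold: float = 0.9) -> str:
    """Match the feature extracted from the person-region image against the
    pre-stored one; unlock on success, refuse and remind the user otherwise."""
    if cosine_similarity(extracted, stored) >= threshold:
        return "equipment unlocked"
    return "unlock refused: please try again"   # remind the current user

stored = np.array([0.2, 0.9, 0.4, 0.1])          # pre-stored feature
print(unlock_equipment(stored * 1.05, stored))   # same user, slightly scaled
print(unlock_equipment(np.array([0.9, 0.1, 0.1, 0.9]), stored))  # mismatch
```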
In some embodiments, the acquired person-region image can be displayed on the display screen of the electronic apparatus 1000, or printed by a printer connected to the electronic apparatus 1000.
Referring to Fig. 3 and Fig. 10 together, an embodiment of the present invention also proposes an electronic apparatus 1000. The electronic apparatus 1000 includes an equipment unlocking device 100 for an online game. The equipment unlocking device 100 of the online game can be realized by hardware and/or software. The equipment unlocking device 100 of the online game includes an imaging device 10 and a processor 20.
The imaging device 10 includes a visible-light camera 11 and a depth image acquisition component 12.
Specifically, the visible-light camera 11 includes an image sensor 111 and lenses 112. The visible-light camera 11 can be used to capture color information of the current user to obtain the scene image, wherein the image sensor 111 includes a color filter array (such as a Bayer filter array), and the number of lenses 112 may be one or more. While the visible-light camera 11 acquires the scene image, each imaging pixel in the image sensor 111 senses the light intensity and wavelength information in the photographed scene and generates a set of raw image data. The image sensor 111 sends this set of raw image data to the processor 20, and the processor 20 obtains the color scene image after performing operations such as denoising and interpolation on the raw image data. The processor 20 can process each image pixel in the raw image data one by one in various formats; for example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the processor 20 can process each image pixel with the same or different bit depths.
The depth image acquisition component 12 includes a structured light projector 121 and a structured light camera 122. The depth image acquisition component 12 can be used to capture depth information of the current user to obtain the depth image. The structured light projector 121 is used to project structured light onto the current user, wherein the structured light pattern may be laser stripes, Gray codes, sinusoidal fringes, a randomly arranged speckle pattern, or the like. The structured light camera 122 includes an image sensor 1221 and lenses 1222, and the number of lenses 1222 may be one or more. The image sensor 1221 is used to capture the structured light image projected onto the current user by the structured light projector 121. The structured light image can be sent by the depth image acquisition component 12 to the processor 20 for processing such as demodulation, phase recovery, and phase information calculation to obtain the depth information of the current user.
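As one concrete, purely illustrative instance of the demodulation and phase-recovery processing described above, a four-step phase-shifting scheme can be sketched. The linear `scale`/`offset` mapping stands in for the device's calibrated phase-to-depth conversion, which this document does not specify:

```python
import numpy as np

def demodulate_phase(i1, i2, i3, i4):
    """Four-step phase-shifting demodulation: recover the wrapped phase of
    the projected fringe pattern from four captures whose fringe phase is
    shifted by 90 degrees each (I_k = A + B*cos(phi + k*pi/2))."""
    return np.arctan2(i4 - i2, i1 - i3)

def phase_to_depth(phase, scale=1.0, offset=0.0):
    """Map recovered phase to depth. `scale` and `offset` are placeholders
    for the calibrated triangulation parameters of the projector-camera
    pair; real devices also need phase unwrapping, omitted here."""
    return scale * phase + offset
```

Applying `demodulate_phase` per pixel and then `phase_to_depth` yields a depth image, matching the demodulate-convert-generate pipeline of the embodiments.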
In some embodiments, the functions of the visible-light camera 11 and the structured light camera 122 can be realized by a single camera; in other words, the imaging device 10 includes only one camera and one structured light projector 121, and this camera can capture not only the scene image but also the structured light image.
In addition to obtaining the depth image using structured light, the depth image of the current user can also be obtained by depth image acquisition methods such as binocular vision or time of flight (TOF).
The processor 20 is further used to unlock the equipment based on the person region image. When extracting the person region image, the processor 20 can extract a two-dimensional person region image from the scene image in combination with the depth information in the depth image, or can build a three-dimensional representation of the person region according to the depth information in the depth image and perform color filling on the three-dimensional person region with the color information in the scene image to obtain a three-dimensional colored person region image. Therefore, when unlocking the equipment based on the person region image, the unlocking may be based on the two-dimensional person region image or on the three-dimensional colored person region image.
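The construction of a three-dimensional colored person region image from the depth image and the color information of the scene image can be sketched as a pinhole back-projection. The intrinsics (`fx`, `fy`, `cx`, `cy`) below are placeholder values, since the document gives no camera calibration:

```python
import numpy as np

def colored_point_cloud(depth, color, mask, fx=500.0, fy=500.0,
                        cx=None, cy=None):
    """Back-project the masked person pixels into 3-D with a pinhole model
    and attach per-pixel RGB from the scene image. `depth` is HxW, `color`
    is HxWx3, `mask` is the boolean person region."""
    h, w = depth.shape
    cx = (w - 1) / 2 if cx is None else cx   # assume principal point at
    cy = (h - 1) / 2 if cy is None else cy   # the image centre
    vs, us = np.nonzero(mask)
    z = depth[vs, us]
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    points = np.stack([x, y, z], axis=1)     # N x 3 coordinates
    colors = color[vs, us]                   # N x 3 RGB values
    return points, colors
```

This is the "color filling" idea in miniature: each 3-D point of the person region carries the RGB value of its source pixel in the scene image.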
In addition, the equipment unlocking device 100 of the online game also includes an image memory 30. The image memory 30 can be embedded in the electronic apparatus 1000 or be a memory independent of the electronic apparatus 1000, and may include a direct memory access (DMA) feature. The raw image data collected by the visible-light camera 11, or the structured-light-image-related data collected by the depth image acquisition component 12, can be transmitted to the image memory 30 for storage or caching. The processor 20 can read the raw image data from the image memory 30 for processing to obtain the scene image, and can also read the structured-light-image-related data from the image memory 30 for processing to obtain the depth image. In addition, the scene image and the depth image can also be stored in the image memory 30 for the processor 20 to call for processing at any time; for example, the processor 20 calls the scene image and the depth image to extract the person region image of the current user, and unlocks the equipment based on the person region image.
The equipment unlocking device 100 of the online game may also include a display 50. The display 50 can obtain the person region image of the current user directly from the processor 20, or obtain it from the image memory 30. The display 50 displays the person region image of the current user so as to present it to the current user, or the image is further processed by a graphics engine or a graphics processing unit (GPU). The equipment unlocking device 100 of the online game also includes an encoder/decoder 60. The encoder/decoder 60 can encode and decode image data such as the scene image, the depth image, and the person region image; the encoded image data can be stored in the image memory 30 and can be decompressed by the decoder for display before the person region image is displayed on the display 50. The encoder/decoder 60 can be realized by a central processing unit (CPU), a GPU, or a coprocessor. In other words, the encoder/decoder 60 can be any one or more of a central processing unit (CPU), a GPU, and a coprocessor.
The equipment unlocking device 100 of the online game also includes a control logic device 40. While the imaging device 10 is imaging, the processor 20 can analyze the data obtained by the imaging device to determine image statistics for one or more control parameters (for example, exposure time) of the imaging device 10. The processor 20 sends the image statistics to the control logic device 40, and the control logic device 40 controls the imaging device 10 to image with the determined control parameters. The control logic device 40 may include a processor and/or microcontroller that executes one or more routines (such as firmware). The one or more routines can determine the control parameters of the imaging device 10 according to the received image statistics.
Referring to Fig. 11, the electronic apparatus 1000 of an embodiment of the present invention includes one or more processors 200, a memory 300, and one or more programs 310. The one or more programs 310 are stored in the memory 300 and configured to be executed by the one or more processors 200. The programs 310 include instructions for performing the equipment unlocking method of the online game of any one of the above embodiments.
For example, the program 310 includes instructions for performing the equipment unlocking method of the online game with the following steps:
obtaining a scene image of the current user;
obtaining a depth image of the current user;
processing the scene image and the depth image to extract the person region of the current user in the scene image and obtain a person region image; and
unlocking the equipment based on the person region image.
For another example, the program 310 also includes instructions for performing the equipment unlocking method of the online game with the following steps:
demodulating phase information corresponding to each pixel in the structured light image;
converting the phase information into depth information; and
generating the depth image according to the depth information.
The computer-readable storage medium of an embodiment of the present invention includes a computer program used in combination with the electronic apparatus 1000 capable of imaging. The computer program can be executed by the processor 200 to complete the equipment unlocking method of the online game of any one of the above embodiments.
For example, the computer program can be executed by the processor 200 to complete the equipment unlocking method of the online game with the following steps:
obtaining a scene image of the current user;
obtaining a depth image of the current user;
processing the scene image and the depth image to extract the person region of the current user in the scene image and obtain a person region image; and
unlocking the equipment based on the person region image.
For another example, the computer program can also be executed by the processor 200 to complete the equipment unlocking method of the online game with the following steps:
demodulating phase information corresponding to each pixel in the structured light image;
converting the phase information into depth information; and
generating the depth image according to the depth information.
It should be noted that in the description of the present invention, terms such as "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance. In addition, in the description of the present invention, unless otherwise specified, "multiple" means two or more.
Any process or method description in a flow chart, or otherwise described herein, can be understood as representing a module, fragment, or portion of code that includes one or more executable instructions for realizing the steps of a specific logical function or process, and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
It should be understood that each part of the present invention can be realized by hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods can be realized by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if realized by hardware, as in another embodiment, any one or a combination of the following technologies well known in the art can be used: a discrete logic circuit having logic gate circuits for realizing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
Those skilled in the art can understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing relevant hardware through a program. The program can be stored in a computer-readable storage medium, and when executed, the program includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present invention can be integrated in one processing module, or each unit can exist alone physically, or two or more units can be integrated in one module. The above integrated module can be realized in the form of hardware or in the form of a software functional module. If the integrated module is realized in the form of a software functional module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
The storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disk, or the like.
In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" mean that the specific features, structures, materials, or characteristics described in combination with the embodiment or example are included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described can be combined in an appropriate manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and cannot be understood as limiting the present invention; those of ordinary skill in the art can change, modify, replace, and vary the above embodiments within the scope of the present invention.
Claims (14)
1. An equipment unlocking method for an online game, characterized by comprising the following steps:
obtaining a scene image of a current user;
obtaining a depth image of the current user;
processing the scene image and the depth image to extract a person region of the current user in the scene image and obtain a person region image; and
unlocking the equipment based on the person region image.
2. The equipment unlocking method of the online game according to claim 1, characterized in that obtaining the depth image of the current user comprises:
projecting structured light onto the current user;
capturing a structured light image modulated by the current user; and
demodulating phase information corresponding to each pixel of the structured light image to obtain the depth image.
3. The equipment unlocking method of the online game according to claim 2, characterized in that demodulating the phase information corresponding to each pixel of the structured light image to obtain the depth image comprises:
demodulating the phase information corresponding to each pixel in the structured light image;
converting the phase information into depth information; and
generating the depth image according to the depth information.
4. The equipment unlocking method of the online game according to claim 1, characterized in that processing the scene image and the depth image to extract the person region of the current user in the scene image and obtain the person region image comprises:
identifying a face region in the scene image;
obtaining depth information corresponding to the face region from the depth image;
determining a depth range of the person region according to the depth information of the face region; and
determining, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range, to obtain the person region image.
5. The equipment unlocking method of the online game according to claim 4, characterized by further comprising:
processing the scene image to obtain a full-scene edge image of the scene image; and
correcting the person region image according to the full-scene edge image.
6. The equipment unlocking method of the online game according to any one of claims 1-5, characterized in that unlocking the equipment based on the person region image comprises:
extracting feature points corresponding to the person region image;
judging whether the feature points match preset feature points;
if they match, triggering the unlocking of the equipment; and
if they do not match, determining that unlocking the equipment has failed.
7. An equipment unlocking device for an online game, for use in an electronic apparatus, characterized in that the equipment unlocking device of the online game comprises:
a visible-light camera, the visible-light camera being used to obtain a scene image of a current user;
a depth image acquisition component, the depth image acquisition component being used to obtain a depth image of the current user; and
a processor, the processor being used to:
process the scene image and the depth image to extract a person region of the current user in the scene image and obtain a person region image; and
unlock the equipment based on the person region image.
8. The equipment unlocking device of the online game according to claim 7, characterized in that the depth image acquisition component includes a structured light projector and a structured light camera, the structured light projector being used to project structured light onto the current user;
the structured light camera being used to:
capture a structured light image modulated by the current user; and
demodulate phase information corresponding to each pixel of the structured light image to obtain the depth image.
9. The equipment unlocking device of the online game according to claim 8, characterized in that the structured light camera is also used to:
demodulate the phase information corresponding to each pixel in the structured light image;
convert the phase information into depth information; and
generate the depth image according to the depth information.
10. The equipment unlocking device of the online game according to claim 7, characterized in that the processor is also used to:
identify a face region in the scene image;
obtain depth information corresponding to the face region from the depth image;
determine a depth range of the person region according to the depth information of the face region; and
determine, according to the depth range of the person region, the person region that is connected with the face region and falls within the depth range, to obtain the person region image.
11. The equipment unlocking device of the online game according to claim 10, characterized in that the processor is also used to:
process the scene image to obtain a full-scene edge image of the scene image; and
correct the person region image according to the full-scene edge image.
12. The equipment unlocking device of the online game according to any one of claims 7-11, characterized in that the processor is also used to:
extract feature points corresponding to the person region image;
judge whether the feature points match preset feature points;
if they match, trigger the unlocking of the equipment; and
if they do not match, determine that unlocking the equipment has failed.
13. An electronic apparatus, characterized in that the electronic apparatus comprises:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs comprising instructions for performing the equipment unlocking method of the online game according to any one of claims 1 to 6.
14. A computer-readable storage medium, characterized by comprising a computer program used in combination with an electronic apparatus capable of imaging, the computer program being executable by a processor to complete the equipment unlocking method of the online game according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710813516.1A CN107742071A (en) | 2017-09-11 | 2017-09-11 | The equipment unlocking method and electronic installation of online game |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107742071A true CN107742071A (en) | 2018-02-27 |
Family
ID=61235729
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710813516.1A Pending CN107742071A (en) | 2017-09-11 | 2017-09-11 | The equipment unlocking method and electronic installation of online game |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107742071A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108445643A (en) * | 2018-03-12 | 2018-08-24 | 广东欧珀移动通信有限公司 | Project structured light module and its detection method obtain structure and electronic device with device, image |
CN110569632A (en) * | 2018-06-06 | 2019-12-13 | 南昌欧菲生物识别技术有限公司 | unlocking method and electronic device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104243951A (en) * | 2013-06-07 | 2014-12-24 | 索尼电脑娱乐公司 | Image processing device, image processing system and image processing method |
US20150022649A1 (en) * | 2013-07-16 | 2015-01-22 | Texas Instruments Incorporated | Controlling Image Focus in Real-Time Using Gestures and Depth Sensor Data |
CN104584030A (en) * | 2014-11-15 | 2015-04-29 | 深圳市三木通信技术有限公司 | Verification application method and device based on face recognition |
CN106909911A (en) * | 2017-03-09 | 2017-06-30 | 广东欧珀移动通信有限公司 | Image processing method, image processing apparatus and electronic installation |
Non-Patent Citations (3)
Title |
---|
Zhang Yueyi: "Fast and High-Precision Depth Sensing Based on Structured Light", University of Science and Technology of China * |
Wang Mengwei et al.: "Real-Time Scene Depth Recovery Based on Projected Speckle", Journal of Computer-Aided Design & Computer Graphics * |
Wang Mengwei: "Research on Structured Light Depth Image Acquisition Algorithms", Tsinghua University * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107742296A (en) | Dynamic image generation method and electronic installation | |
CN107610077A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107509045A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107707831A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107734267A (en) | Image processing method and device | |
CN107807806A (en) | Display parameters method of adjustment, device and electronic installation | |
CN107707835A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107707838A (en) | Image processing method and device | |
CN107509043A (en) | Image processing method and device | |
CN107610078A (en) | Image processing method and device | |
CN107644440A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107610080A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107734264A (en) | Image processing method and device | |
CN107705278A (en) | The adding method and terminal device of dynamic effect | |
CN107742300A (en) | Image processing method, device, electronic installation and computer-readable recording medium | |
CN107610127A (en) | Image processing method, device, electronic installation and computer-readable recording medium | |
CN107610076A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107527335A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107613223A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107705243A (en) | Image processing method and device, electronic installation and computer-readable recording medium | |
CN107613239A (en) | Video communication background display methods and device | |
CN107705277A (en) | Image processing method and device | |
CN107592491A (en) | Video communication background display methods and device | |
CN107613228A (en) | The adding method and terminal device of virtual dress ornament | |
CN107682740A (en) | Composite tone method and electronic installation in video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20180227 |