CN109215043A - Image-recognizing method and device, computer readable storage medium - Google Patents

Image-recognizing method and device, computer readable storage medium

Info

Publication number
CN109215043A
Application CN201710524506.6A · Publication CN109215043A
Authority
CN
China
Prior art keywords
main part
depth information
background parts
borderline region
subject
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710524506.6A
Other languages
Chinese (zh)
Inventor
陈朝喜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority application: CN201710524506.6A
Publication: CN109215043A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30196 - Human being; Person
    • G06T 2207/30201 - Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to an image-recognition method and device, and a computer-readable storage medium. The method may include: obtaining depth information of a photographed object, where the photographed object includes a main part and a background part; determining a boundary region between the main part and the background part according to differences in the depth information; and identifying the main part and the background part according to the determined boundary region. Through this technical solution, the main part and the background part of an image can be identified, providing a good basis for further image processing and helping to improve image-processing efficiency.

Description

Image-recognizing method and device, computer readable storage medium
Technical field
The present disclosure relates to the field of image processing, and in particular to an image-recognition method and device, and a computer-readable storage medium.
Background technique
Image-recognition technology is widely used in daily life. The related art proposes distinguishing the human-body part from the background part of an image by using the color difference between the human body and the background. For example, when the shooting background is grass, the human body and the grass can be distinguished through the difference between the skin color of the human body and the color of the grass, so that either the human-body part or the grass part can be further processed.
Summary of the invention
The present disclosure provides an image-recognition method and device, and a computer-readable storage medium, to address deficiencies in the related art.
According to a first aspect of embodiments of the present disclosure, an image-recognition method is provided, including:
obtaining depth information of a photographed object, where the photographed object includes a main part and a background part;
determining a boundary region between the main part and the background part according to differences in the depth information; and
identifying the main part and the background part according to the determined boundary region.
Optionally, the depth information is acquired by a depth camera.
Optionally, determining the boundary region between the main part and the background part according to the differences in the depth information includes:
when the difference between the depth information of pixel units in any region exceeds a preset threshold, determining that the region belongs to the boundary region.
Optionally, identifying the main part and the background part according to the determined boundary region includes:
determining a first part enclosed by the boundary region, and a second part of the photographed object other than the first part; and
taking the first part as the main part and the second part as the background part.
Optionally, the method further includes:
performing face recognition on the identified main part to determine identity information of the main part.
Optionally, performing face recognition on the identified main part includes:
identifying facial features of a face according to the depth information of the main part.
According to a second aspect of embodiments of the present disclosure, an image-recognition device is provided, including:
an acquiring unit configured to obtain depth information of a photographed object, where the photographed object includes a main part and a background part;
a determination unit configured to determine a boundary region between the main part and the background part according to differences in the depth information; and
a first recognition unit configured to identify the main part and the background part according to the determined boundary region.
Optionally, the depth information is acquired by a depth camera.
Optionally, the determination unit includes:
a first determining subunit configured to determine, when the difference between the depth information of pixel units in any region exceeds a preset threshold, that the region belongs to the boundary region.
Optionally, the first recognition unit includes:
a second determining subunit configured to determine a first part enclosed by the boundary region, and a second part of the photographed object other than the first part; and
a processing subunit configured to take the first part as the main part and the second part as the background part.
Optionally, the device further includes:
a second recognition unit configured to perform face recognition on the identified main part to determine identity information of the main part.
Optionally, the second recognition unit includes:
a recognition subunit configured to identify facial features of a face according to the depth information of the main part.
According to a third aspect of embodiments of the present disclosure, an image-recognition device is provided, including:
a processor; and
a memory for storing processor-executable instructions;
where the processor is configured to implement the steps of the method of any one of the above embodiments.
According to a fourth aspect of embodiments of the present disclosure, a computer-readable storage medium is provided, having computer instructions stored thereon, where the instructions, when executed by a processor, implement the steps of the method of any one of the above embodiments.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
As can be seen from the above embodiments, the present disclosure can identify the main part and the background part according to the difference in depth information between the photographed subject and the shooting background, thereby providing a basis for further image processing (for example, face recognition, matting, or beautification) and improving the efficiency of image processing.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Detailed description of the invention
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of an image-recognition method according to an exemplary embodiment.
Figs. 2-3 are schematic illustrations of the principle of a TOF camera according to an exemplary embodiment.
Fig. 4 is a flowchart of another image-recognition method according to an exemplary embodiment.
Fig. 5 is a schematic diagram of a TOF camera 20 photographing an object 30 according to an exemplary embodiment.
Fig. 6 is a schematic diagram of identifying the main part and the background part according to the boundary region, according to an exemplary embodiment.
Fig. 7 is a block diagram of an image-recognition device according to an exemplary embodiment.
Fig. 8 is a block diagram of another image-recognition device according to an exemplary embodiment.
Fig. 9 is a block diagram of another image-recognition device according to an exemplary embodiment.
Fig. 10 is a block diagram of another image-recognition device according to an exemplary embodiment.
Fig. 11 is a block diagram of another image-recognition device according to an exemplary embodiment.
Fig. 12 is a structural schematic diagram of an image-recognition device according to an exemplary embodiment.
Specific embodiment
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of devices and methods consistent with some aspects of the application, as detailed in the appended claims.
The terminology used in this application is for the purpose of describing particular embodiments only and is not intended to limit the application. The singular forms "a", "said", and "the" used in this application and the appended claims are also intended to include plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, and so on may be used in this application to describe various information, the information should not be limited by these terms; they are only used to distinguish information of the same type from one another. For example, without departing from the scope of the application, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
Fig. 1 is a flowchart of an image-recognition method according to an exemplary embodiment. As shown in Fig. 1, the method is applied to an electronic device and may include the following steps.
In step 102, the depth information of the photographed object is obtained.
In this embodiment, the photographed object includes a main part and a background part. The depth information can be acquired by a depth camera, for example a binocular RGB camera, a structured-light camera, or a camera based on TOF (Time of Flight) technology.
Taking TOF technology as an example, as shown in Figs. 2-3, the TOF camera 20 emits an optical signal (the transmitted signal) toward the photographed object 10 and receives the returned optical signal (the return signal). From the phase difference φ between the transmitted signal and the corresponding return signal, the round-trip time can be calculated as t = φ/(2πf), where f is the frequency of the optical signal. According to the speed of light c, the TOF camera 20 can then obtain the distance to the photographed object 10 as d = c·t/2.
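The phase-based ranging described above can be sketched in a few lines of code. This is only an illustrative reconstruction of the formulas t = φ/(2πf) and d = c·t/2; the function and parameter names are ours, not the patent's:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def tof_distance(phase_diff: float, modulation_freq: float) -> float:
    """Distance from the phase shift between the emitted and returned signal.

    Round-trip time t = phase_diff / (2 * pi * f); the light covers the
    distance twice, so d = c * t / 2.
    """
    t = phase_diff / (2.0 * math.pi * modulation_freq)
    return C * t / 2.0

# A phase shift of pi at a 10 MHz modulation frequency corresponds to
# half the unambiguous range: c / (4 * f), about 7.49 m.
d = tof_distance(math.pi, 10e6)
```

Note that the phase difference wraps around at 2π, so a single modulation frequency can only resolve distances up to c/(2f) unambiguously; practical TOF cameras handle this separately.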
In step 104, the boundary region between the main part and the background part is determined according to the differences in the depth information.
In this embodiment, the distance between the main part and the background part is often large (for example, if the main part is a human body and the background part is a mountain, the distance of the human body from the depth camera is much smaller than that of the mountain). Thus, when the difference between the depth information of pixel units in any region exceeds a preset threshold, that region can be determined to belong to the boundary region. The boundary region may be the line segment formed by the pixel units where the main part and the background part meet, or the region formed by those pixel units together with a preset number of surrounding pixel units; the present disclosure is not limited in this respect.
In step 106, the main part and the background part are identified according to the determined boundary region.
In this embodiment, based on the above determination of the boundary region, the part within the boundary region is the main part, and the part outside it is the background part. Accordingly, the first part enclosed by the boundary region, and the second part of the photographed object other than the first part, can be determined, with the first part taken as the main part and the second part as the background part. By determining the boundary region in the captured image in this way, the main part and the background part can be identified.
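As an illustration of this step, one simple way to split an image into the part enclosed by a boundary region and the part outside it is a flood fill from the image border. This is a hypothetical sketch, not an implementation from the patent, and it assumes the boundary forms a closed curve:

```python
from collections import deque

def split_by_boundary(boundary, h, w):
    """Flood-fill from the image border: every pixel reachable without
    crossing a boundary pixel is background; the enclosed remainder is
    the main part. `boundary` is a set of (row, col) positions."""
    outside = set()
    queue = deque((r, c) for r in range(h) for c in range(w)
                  if (r in (0, h - 1) or c in (0, w - 1)) and (r, c) not in boundary)
    outside.update(queue)
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w \
                    and (nr, nc) not in outside and (nr, nc) not in boundary:
                outside.add((nr, nc))
                queue.append((nr, nc))
    main = {(r, c) for r in range(h) for c in range(w)
            if (r, c) not in outside and (r, c) not in boundary}
    return main, outside
```

In a real pipeline this role would typically be played by a library routine such as a connected-components or flood-fill operation; the breadth-first search here only makes the enclosed/outside distinction explicit.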
In this embodiment, face recognition can further be performed on the identified main part to determine the identity information of the main part. For example, since each position of the face has a different structure, and hence different depth information, the facial features of the face can be identified according to the depth information of the main part, so as to determine the identity information of the currently identified main part.
As can be seen from the above, the present disclosure can identify the main part and the background part according to the difference in depth information between the photographed subject and the shooting background, thereby providing a basis for further image processing (for example, face recognition, matting, or beautification) and improving the efficiency of image processing.
For ease of understanding, the technical solution of the present disclosure is further described below with reference to specific scenarios and the accompanying drawings.
Fig. 4 is a flowchart of another image-recognition method according to an exemplary embodiment. As shown in Fig. 4, the method is applied to an electronic device and may include the following steps.
In step 402, the depth information of the photographed object is obtained.
In this embodiment, the photographed object in the image includes a main part and a background part. The depth information of the image can be acquired by a depth camera, such as a binocular RGB camera, a structured-light camera, or a TOF camera.
Taking a TOF camera as an example, as shown in Fig. 5, the TOF camera 20 photographs an object 30. An optical signal can be emitted toward each point of the object 30 in turn, in a preset order, to measure the depth information of the corresponding point; for example, the measurement can proceed from left to right and from top to bottom as in the figure. Meanwhile, the object 30 can be measured multiple times to obtain multiple groups of depth information, which are then weighted and averaged to calculate the final depth information; the weights can be set flexibly according to the actual situation, and the present disclosure is not limited in this respect. Measuring the object 30 multiple times in a preset order can improve the accuracy of the measured depth information and the three-dimensional effect of the image, further improving the accuracy of the subsequent identification of the main part and the background part.
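The weighted averaging of repeated measurements can be sketched as follows; the uniform default weights are our own assumption, since the patent leaves the weights open:

```python
# Fuse repeated depth measurements of the same scene by a per-pixel
# weighted average.
def fuse_depth(measurements, weights=None):
    """measurements: list of equally-sized depth maps (lists of lists).
    weights: one weight per measurement; defaults to uniform.
    Returns the per-pixel weighted average."""
    n = len(measurements)
    if weights is None:
        weights = [1.0 / n] * n
    total = sum(weights)
    h, w = len(measurements[0]), len(measurements[0][0])
    return [[sum(wt * m[r][c] for wt, m in zip(weights, measurements)) / total
             for c in range(w)]
            for r in range(h)]
```

For example, giving later measurements larger weights would favor the most recent readings, which is one plausible way to "set the weights flexibly according to the actual situation".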
In step 404, the difference in depth information between each pixel unit in the image and the pixel units within a preset range is calculated.
In this embodiment, the preset range may be a certain number of pixel units above, below, to the left of, and to the right of each pixel unit, or any other range; the present disclosure is not limited in this respect.
In step 406, the boundary region between the main part and the background part is determined.
In this embodiment, the distance between the main part and the background part is often large (for example, if the main part is a human body and the background part is a mountain, the distance of the human body from the depth camera is much smaller than that of the mountain). Thus, when the difference between the depth information of pixel units in any region exceeds a preset threshold, that region can be determined to belong to the boundary region. The boundary region may be the line segment formed by the pixel units where the main part and the background part meet, or the region formed by those pixel units together with a preset number of surrounding pixel units; the present disclosure is not limited in this respect.
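Steps 404-406 can be illustrated with a minimal sketch that marks a pixel unit as boundary when its depth differs from any pixel unit within the preset range by more than the preset threshold. The 4-neighbourhood scan and the names are our own choices:

```python
# Mark boundary pixels by thresholding depth differences within a
# preset range of each pixel unit (steps 404-406, sketched).
def find_boundary(depth, threshold, reach=1):
    """depth: 2-D list of per-pixel depth values.
    Returns the set of (row, col) positions whose depth differs from
    some pixel within `reach` steps up/down/left/right by more than
    `threshold`."""
    h, w = len(depth), len(depth[0])
    boundary = set()
    for r in range(h):
        for c in range(w):
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                for k in range(1, reach + 1):
                    nr, nc = r + dr * k, c + dc * k
                    if 0 <= nr < h and 0 <= nc < w and \
                            abs(depth[r][c] - depth[nr][nc]) > threshold:
                        boundary.add((r, c))
    return boundary
```

With a depth jump between a near subject and a far background, pixels on both sides of the jump are flagged, which matches the patent's notion of a boundary region a few pixel units wide.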
In step 408, the main part and the background part are identified according to the determined boundary region.
In this embodiment, based on the determination of the boundary region in step 406, the part within the boundary region is the main part, and the part outside it is the background part. Accordingly, the first part enclosed by the boundary region, and the second part of the photographed object other than the first part, can be determined, with the first part taken as the main part and the second part as the background part. By determining the boundary region in the captured image in this way, the main part and the background part can be identified.
For example, as shown in Fig. 6, the captured image contains a human-body part 40, a mountain part 50, and a water-surface part 60. The distances between the human-body part 40 and the mountain part 50, and between the human-body part 40 and the water-surface part 60, are large, while the distance between the mountain part 50 and the water-surface part 60 is small. Therefore, when the difference between the depth information at the boundary of the human-body part 40 and the depth information of the surrounding water surface and mountain exceeds a preset threshold (which can be set flexibly according to the actual situation, for example with different thresholds for different shooting scenes), that boundary can be determined to be the boundary region between the human-body part 40 and the other regions. It can then be further determined that the first part enclosed by the boundary region, namely the part enclosed by the boundary of the human-body part 40 (possibly together with the edge of the whole image), is the main part, and that the second part of the image other than the first part, namely the mountain part 50 and the water-surface part 60, is the background part.
In step 410, face recognition is performed on the identified main part to determine the identity information of the main part.
In this embodiment, since each position of the face has a different structure, and hence different depth information, the facial features of the face can be identified according to the depth information of the main part, so as to determine the identity information of the currently identified main part. For example, the position, contour, and structure of each facial feature can be judged from the depth information of the main part. Within an ear, the depth information of the pixel units differs only slightly, since they all belong to the ear; similarly, the depth information of the pixel units within an eye also differs only slightly. Meanwhile, since the ears and the eyes are far apart, and the eyes are in front of the ears (taking the facial orientation as the positive direction, with the camera in front of the face), the depths of the pixel units of the ears and those of the eyes differ greatly, with the depth of the eyes smaller than the depth of the ears. Therefore, ears and eyes can be respectively identified according to the difference between their depth information. The depth-information characteristics of other facial features are similar and are not described here again.
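The ear/eye reasoning above can be illustrated with a toy sketch: each candidate facial region should have a small internal depth spread, and regions are then ordered front-to-back by mean depth (the eyes nearer the camera than the ears). The flatness threshold and the sample depths are invented for illustration only:

```python
# Toy illustration: within one facial feature the depth spread is small,
# while different features sit at clearly different mean depths.
def classify_regions(regions, flatness=0.01):
    """regions: dict name -> list of pixel depths (metres).
    Keeps only regions whose internal spread is at most `flatness`
    (i.e. plausibly a single feature) and returns their names ordered
    front-to-back by mean depth."""
    means = {}
    for name, depths in regions.items():
        if max(depths) - min(depths) <= flatness:
            means[name] = sum(depths) / len(depths)
    return sorted(means, key=means.get)

# eyes roughly 0.50 m from the camera, ears roughly 0.58 m
order = classify_regions({"eye": [0.500, 0.502], "ear": [0.580, 0.583]})
# order[0] is the frontmost feature
```

A region whose spread exceeds the flatness threshold would simply be dropped as spanning more than one feature; real face recognition from depth is of course far more involved than this ordering heuristic.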
In summary, the present disclosure can identify the main part and the background part according to the difference in depth information between the photographed subject and the shooting background, thereby providing a basis for further image processing (such as face recognition, matting, or beautification) and improving the efficiency of image processing.
Corresponding to the foregoing embodiments of the image-recognition method, the present disclosure further provides embodiments of an image-recognition device.
Fig. 7 is a block diagram of an image-recognition device according to an exemplary embodiment. Referring to Fig. 7, the device includes an acquiring unit 71, a determination unit 72, and a first recognition unit 73.
The acquiring unit 71 is configured to obtain the depth information of the photographed object, where the photographed object includes a main part and a background part.
The determination unit 72 is configured to determine the boundary region between the main part and the background part according to the differences in the depth information.
The first recognition unit 73 is configured to identify the main part and the background part according to the determined boundary region.
Optionally, the depth information is acquired by a depth camera.
As shown in Fig. 8, which is a block diagram of another image-recognition device according to an exemplary embodiment, this embodiment builds on the embodiment shown in Fig. 7; the determination unit 72 may include a first determining subunit 721.
The first determining subunit 721 is configured to determine, when the difference between the depth information of pixel units in any region exceeds a preset threshold, that the region belongs to the boundary region.
As shown in Fig. 9, which is a block diagram of another image-recognition device according to an exemplary embodiment, this embodiment builds on the embodiment shown in Fig. 7; the first recognition unit 73 may include a second determining subunit 722 and a processing subunit 723.
The second determining subunit 722 is configured to determine the first part enclosed by the boundary region, and the second part of the photographed object other than the first part.
The processing subunit 723 is configured to take the first part as the main part and the second part as the background part.
It should be noted that the structures of the second determining subunit 722 and the processing subunit 723 in the device embodiment shown in Fig. 9 may also be included in the device embodiment of Fig. 8; the present disclosure is not limited in this respect.
As shown in Fig. 10, which is a block diagram of another image-recognition device according to an exemplary embodiment, this embodiment builds on the embodiment shown in Fig. 7; the device may further include a second recognition unit 74.
The second recognition unit 74 is configured to perform face recognition on the identified main part to determine the identity information of the main part.
As shown in Fig. 11, which is a block diagram of another image-recognition device according to an exemplary embodiment, this embodiment builds on the embodiment shown in Fig. 10; the second recognition unit 74 may include a recognition subunit 741.
The recognition subunit 741 is configured to identify the facial features of a face according to the depth information of the main part.
Regarding the devices in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method and will not be elaborated here.
Since the device embodiments basically correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant parts. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the disclosed solution, which can be understood and implemented by those of ordinary skill in the art without creative effort.
Correspondingly, the present disclosure further provides an image-recognition device, including: a processor; and a memory for storing processor-executable instructions; where the processor is configured to: obtain depth information of a photographed object, where the photographed object includes a main part and a background part; determine a boundary region between the main part and the background part according to differences in the depth information; and identify the main part and the background part according to the determined boundary region.
Correspondingly, the present disclosure further provides a terminal, the terminal including a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors, and include instructions for: obtaining depth information of a photographed object, where the photographed object includes a main part and a background part; determining a boundary region between the main part and the background part according to differences in the depth information; and identifying the main part and the background part according to the determined boundary region.
Fig. 12 is a block diagram of an image-recognition device 1200 according to an exemplary embodiment. For example, the device 1200 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Fig. 12, the device 1200 may include one or more of the following components: a processing component 1202, a memory 1204, a power component 1206, a multimedia component 1208, an audio component 1210, an input/output (I/O) interface 1212, a sensor component 1214, and a communication component 1216.
The processing component 1202 generally controls the overall operation of the device 1200, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 1202 may include one or more processors 1220 to execute instructions so as to perform all or part of the steps of the methods described above. In addition, the processing component 1202 may include one or more modules to facilitate interaction between the processing component 1202 and other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 1208 and the processing component 1202.
The memory 1204 is configured to store various types of data to support operation of the device 1200. Examples of such data include instructions for any application or method operated on the device 1200, contact data, phone-book data, messages, pictures, videos, and so on. The memory 1204 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 1206 provides power to the various components of the device 1200. The power component 1206 may include a power-management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 1200.
The multimedia component 1208 includes a screen providing an output interface between the device 1200 and the user. In some embodiments, the screen may include a liquid-crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action but also detect the duration and pressure associated with the touch or swipe. In some embodiments, the multimedia component 1208 includes a front camera and/or a rear camera. When the device 1200 is in an operating mode, such as a photographing mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 1210 is configured to output and/or input audio signals. For example, the audio component 1210 includes a microphone (MIC) configured to receive external audio signals when the device 1200 is in an operating mode, such as a call mode, a recording mode, or a voice-recognition mode. The received audio signals may be further stored in the memory 1204 or transmitted via the communication component 1216. In some embodiments, the audio component 1210 further includes a speaker for outputting audio signals.
The I/O interface 1212 provides an interface between the processing component 1202 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor component 1214 includes one or more sensors for providing status assessments of various aspects of the device 1200. For example, the sensor component 1214 can detect the open/closed state of the device 1200 and the relative positioning of components, such as the display and keypad of the device 1200; the sensor component 1214 can also detect a change in position of the device 1200 or a component of the device 1200, the presence or absence of user contact with the device 1200, the orientation or acceleration/deceleration of the device 1200, and a change in temperature of the device 1200. The sensor component 1214 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1216 is configured to facilitate wired or wireless communication between the device 1200 and other devices. The device 1200 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 1216 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1216 also includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 1200 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above-described method.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 1204 including instructions, executable by the processor 1220 of the device 1200 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, or the like.
Other embodiments of the disclosure will be readily apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include common knowledge or customary technical means in the art not disclosed herein. The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. An image recognition method, comprising:
obtaining depth information of a photographed object, wherein the photographed object includes a main portion and a background portion;
determining a boundary region between the main portion and the background portion according to differences in the depth information; and
identifying the main portion and the background portion according to the determined boundary region.
2. The method according to claim 1, wherein the depth information is acquired by a depth camera.
3. The method according to claim 1, wherein determining the boundary region between the main portion and the background portion according to differences in the depth information comprises:
when the difference between the depth information of pixel units in any region exceeds a preset threshold, determining that the region belongs to the boundary region.
4. The method according to claim 1, wherein identifying the main portion and the background portion according to the determined boundary region comprises:
determining a first portion enclosed by the boundary region and a second portion of the photographed object that is different from the first portion; and
taking the first portion as the main portion and the second portion as the background portion.
5. The method according to claim 1, further comprising:
performing face recognition on the identified main portion to determine identity information of the main portion.
6. The method according to claim 5, wherein performing face recognition on the identified main portion comprises:
identifying facial features of a face according to the depth information of the main portion.
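Claims 1-6 describe a straightforward depth-based segmentation pipeline: threshold the local depth differences to find the boundary region (claim 3), then split the image into the portion enclosed by that boundary and the remainder (claim 4). The following is a minimal Python/NumPy sketch of that pipeline, not the patent's actual implementation; in particular, the nearer-than-average heuristic used to decide which side of the boundary counts as the main portion is a hypothetical stand-in, since the claims leave that choice open:

```python
import numpy as np

def boundary_region(depth, threshold):
    """Claim 3: a region belongs to the boundary when the depth difference
    between neighboring pixel units exceeds a preset threshold."""
    jump_v = np.abs(np.diff(depth, axis=0)) > threshold  # vertical neighbor jumps
    jump_h = np.abs(np.diff(depth, axis=1)) > threshold  # horizontal neighbor jumps
    boundary = np.zeros(depth.shape, dtype=bool)
    boundary[:-1, :] |= jump_v   # mark both pixels on each side of a jump
    boundary[1:, :] |= jump_v
    boundary[:, :-1] |= jump_h
    boundary[:, 1:] |= jump_h
    return boundary

def segment(depth, threshold):
    """Claim 4: the portion enclosed by the boundary region is taken as the
    main portion, the rest as the background. Which side counts as "enclosed"
    is decided here by a hypothetical nearer-than-average-depth heuristic."""
    boundary = boundary_region(depth, threshold)
    main = (depth < depth.mean()) & ~boundary
    background = ~main & ~boundary
    return main, background
```

On a synthetic 8x8 depth map with a near object (depth 1) in front of a far background (depth 5) and a threshold of 2, the interior of the near block comes out as the main portion, the far pixels as background, and a thin ring of pixels between them as the boundary region.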
7. An image recognition apparatus, comprising:
an acquiring unit configured to obtain depth information of a photographed object, wherein the photographed object includes a main portion and a background portion;
a determination unit configured to determine a boundary region between the main portion and the background portion according to differences in the depth information; and
a first recognition unit configured to identify the main portion and the background portion according to the determined boundary region.
8. The apparatus according to claim 7, wherein the depth information is acquired by a depth camera.
9. The apparatus according to claim 7, wherein the determination unit comprises:
a first determining subunit configured to, when the difference between the depth information of pixel units in any region exceeds a preset threshold, determine that the region belongs to the boundary region.
10. The apparatus according to claim 7, wherein the first recognition unit comprises:
a second determining subunit configured to determine a first portion enclosed by the boundary region and a second portion of the photographed object that is different from the first portion; and
a processing subunit configured to take the first portion as the main portion and the second portion as the background portion.
11. The apparatus according to claim 7, further comprising:
a second recognition unit configured to perform face recognition on the identified main portion to determine identity information of the main portion.
12. The apparatus according to claim 11, wherein the second recognition unit comprises:
a recognition subunit configured to identify facial features of a face according to the depth information of the main portion.
13. An image recognition apparatus, comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the steps of the method according to any one of claims 1-6.
14. A computer-readable storage medium having computer instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of the method according to any one of claims 1-6.
CN201710524506.6A 2017-06-30 2017-06-30 Image-recognizing method and device, computer readable storage medium Pending CN109215043A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710524506.6A CN109215043A (en) 2017-06-30 2017-06-30 Image-recognizing method and device, computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710524506.6A CN109215043A (en) 2017-06-30 2017-06-30 Image-recognizing method and device, computer readable storage medium

Publications (1)

Publication Number Publication Date
CN109215043A true CN109215043A (en) 2019-01-15

Family

ID=64961149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710524506.6A Pending CN109215043A (en) 2017-06-30 2017-06-30 Image-recognizing method and device, computer readable storage medium

Country Status (1)

Country Link
CN (1) CN109215043A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160328601A1 (en) * 2014-04-25 2016-11-10 Tencent Technology (Shenzhen) Company Limited Three-dimensional facial recognition method and system
CN106469446A (en) * 2015-08-21 2017-03-01 小米科技有限责任公司 Depth image segmentation method and segmentation device
CN106295640A (en) * 2016-08-01 2017-01-04 乐视控股(北京)有限公司 Object recognition method and device for an intelligent terminal
CN106648063A (en) * 2016-10-19 2017-05-10 北京小米移动软件有限公司 Gesture recognition method and device
CN106534590A (en) * 2016-12-27 2017-03-22 努比亚技术有限公司 Photo processing method and apparatus, and terminal

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
宓超等: "《装卸机器视觉及其应用》", 31 January 2016, 上海:上海科学技术出版社 *
徐进军编: "《工业测量技术与数据处理》", 28 February 2014, 武汉:武汉大学出版社 *
陈明哲编著: "《机器人控制》", 31 March 1989, 北京航空航天大学出版社 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110099217A (en) * 2019-05-31 2019-08-06 努比亚技术有限公司 Image capturing method based on TOF technology, mobile terminal, and computer-readable storage medium
CN111726531A (en) * 2020-06-29 2020-09-29 北京小米移动软件有限公司 Image shooting method, processing method, device, electronic equipment and storage medium
CN111726531B (en) * 2020-06-29 2022-03-01 北京小米移动软件有限公司 Image shooting method, processing method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN104850828B (en) Character recognition method and device
CN105631403B (en) Face identification method and device
KR101649596B1 (en) Method, apparatus, program, and recording medium for skin color adjustment
CN106454336B Method, device, and terminal for detecting that a terminal camera is blocked
CN104408402B (en) Face identification method and device
JP2017532922A (en) Image photographing method and apparatus
CN105554389B (en) Shooting method and device
CN107958223B (en) Face recognition method and device, mobile equipment and computer readable storage medium
CN107688781A (en) Face identification method and device
CN105117111B Rendering method and device for a virtual reality interactive picture
CN108848313B (en) Multi-person photographing method, terminal and storage medium
CN104933419B Method and apparatus for obtaining an iris image, and iris recognition device
CN108363982B (en) Method and device for determining number of objects
CN106600530B (en) Picture synthesis method and device
CN111917980B (en) Photographing control method and device, storage medium and electronic equipment
CN105100634B (en) Image capturing method and device
CN104408404A (en) Face identification method and apparatus
CN105512615B (en) Image processing method and device
CN105208284B Shooting reminder method and device
CN109726614A 3D stereoscopic imaging method and device, readable storage medium, and electronic device
CN108154466A (en) Image processing method and device
CN106774849B (en) Virtual reality equipment control method and device
CN104573642B (en) Face identification method and device
CN104702848B Method and device for displaying framing information
CN109215043A (en) Image-recognizing method and device, computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190115
