CN108876835A - Depth information detection method, device and system and storage medium - Google Patents
- Publication number
- CN108876835A (application number CN201810262054.3A)
- Authority
- CN
- China
- Prior art keywords
- images
- depth information
- target
- measured
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Abstract
Embodiments of the present invention provide a depth information detection method, apparatus, system, and storage medium. The method includes: obtaining two images acquired respectively by two image acquisition devices; performing target detection on each of the two images to determine the target objects contained in each image; matching the target objects contained in the two images to determine the same object under measurement appearing in both images; and determining the depth information of the object under measurement based on its positions in the two images. Because object matching is based on content understanding, the depth information detection method, apparatus, system, and storage medium according to embodiments of the present invention improve object matching correctness across a variety of environments. The method does not require the pictures imaged by the two cameras to be nearly identical, which facilitates an increased depth detection range, and because matching is content-based, the two cameras may be installed at any angle and with a large spacing.
Description
Technical field
The present invention relates to the field of image processing, and more specifically to a depth information detection method, apparatus, system, and storage medium.
Background art
In the field of security surveillance, besides identifying the identities of persons in acquired images, the spatial relationships among multiple persons (for example, whether they are walking side by side or one closely behind another) are also important. This requires that the image processing system associated with the video camera (or camera) not only detect faces but also determine the spatial position of each face. Since the imaging result of a video camera is a flat image, obtaining the spatial position of a face requires a method of calculating the distance (i.e., depth) between the face and the video camera. Accordingly, it is desirable to provide a depth information detection method.
Summary of the invention
The present invention is proposed in view of the above problem. The present invention provides a depth information detection method, apparatus, system, and storage medium.
According to one aspect of the present invention, a depth information detection method is provided. The method includes: obtaining two images acquired respectively by two image acquisition devices; performing target detection on each of the two images to determine the target objects contained in each image; matching the target objects contained in the two images to determine the same object under measurement appearing in both images; and determining the depth information of the object under measurement based on its positions in the two images.
Illustratively, before determining the depth information of the object under measurement based on its positions in the two images, the method further includes: performing key point detection on the object under measurement in each of the two images to determine the positions, in the two images, of one or more key points of the object under measurement. Determining the depth information of the object under measurement based on its positions in the two images then includes: determining the depth information of the object under measurement based on the positions of the one or more key points in the two images.
Illustratively, determining the depth information of the object under measurement based on the positions of the one or more key points in the two images includes: for each of the one or more key points, determining the disparity of that key point between the two image acquisition devices based on its positions in the two images; for each of the one or more key points, calculating the depth information of that key point based on its disparity between the two image acquisition devices; and determining the depth information of the object under measurement based on the depth information of the one or more key points.
Illustratively, when the number of key points is one, determining the depth information of the object under measurement based on the depth information of the one or more key points includes: taking the depth information of that key point as the depth information of the object under measurement.
Illustratively, when the number of key points is more than one, determining the depth information of the object under measurement based on the depth information of the one or more key points includes: combining the depth information of the multiple key points to obtain combined depth information; and taking the combined depth information as the depth information of the object under measurement.
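The passage above does not specify how the per-key-point depths are combined. A minimal sketch, under the assumption that a robust central value is acceptable, is to take the median of the per-key-point depths; the function name and the choice of median are illustrative, not the patent's prescribed method.

```python
from statistics import median

def combine_keypoint_depths(depths):
    """Combine per-key-point depth estimates (e.g., in meters) into a
    single depth value for the object under measurement. The median is
    an illustrative choice; it tolerates an occasional outlier from a
    mismatched key point."""
    if not depths:
        raise ValueError("at least one key point depth is required")
    return median(depths)
```

For example, with depths [2.0, 2.05, 2.1, 5.0] the outlier 5.0 has little influence and the combined depth stays near 2 meters.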
Illustratively, for each of the one or more key points, calculating the depth information of that key point based on its disparity between the two image acquisition devices includes:
calculating the depth information of each of the one or more key points based on the following formula:

Z_i = f · B / (X_Ri − X_Ti)

where Z_i is the depth information of the i-th key point, f is the focal length of the two image acquisition devices, B is the distance between the centers of the two image acquisition devices, X_Ri and X_Ti are the distances from the imaging points of the i-th key point on the two images to a predetermined edge of the corresponding image, and X_Ri − X_Ti is the disparity of the i-th key point between the two image acquisition devices.
Illustratively, matching the target objects contained in the two images to determine the same object under measurement appearing in both images includes: for any two target objects contained respectively in the two images, calculating the difference between their positions in the respective images, and determining the two target objects to be the same target object if the difference is less than a preset threshold; and selecting the matched object under measurement from the target objects contained in the two images.
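The position-difference test just described can be sketched as follows. The box layout (x, y, w, h), the use of the Euclidean distance between box centers as the "difference", and the greedy pairing loop are illustrative assumptions; the patent only requires some position difference compared against a preset threshold.

```python
import math

def boxes_are_same_object(box_a, box_b, threshold_px):
    """Decide whether two detections are the same target object by
    comparing bounding-box positions. Boxes are (x, y, w, h) with
    (x, y) the upper-left corner; the difference is taken here as the
    Euclidean distance between box centers."""
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    center_a = (xa + wa / 2, ya + ha / 2)
    center_b = (xb + wb / 2, yb + hb / 2)
    return math.dist(center_a, center_b) < threshold_px

def match_objects(boxes_1, boxes_2, threshold_px):
    """Pair each box detected in the first image with the first unused
    box in the second image whose position difference is below the
    threshold; returns (index_in_image_1, index_in_image_2) pairs."""
    pairs, used = [], set()
    for i, a in enumerate(boxes_1):
        for j, b in enumerate(boxes_2):
            if j not in used and boxes_are_same_object(a, b, threshold_px):
                pairs.append((i, j))
                used.add(j)
                break
    return pairs
```

As the description later notes, this criterion presumes the two cameras are close enough that the same object lands at nearby image positions.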
Illustratively, matching the target objects contained in the two images to determine the same object under measurement appearing in both images includes: matching the target objects contained in the two images using an object matching network, to obtain object matching information that is output by the object matching network and indicates the same target objects in the two images; and, based on the object matching information, selecting the matched object under measurement from the target objects contained in the two images.
According to a further aspect of the present invention, a depth information detection apparatus is provided, including: an image acquisition module for obtaining two images acquired respectively by two image acquisition devices; a target detection module for performing target detection on each of the two images to determine the target objects contained in each image; an object matching module for matching the target objects contained in the two images to determine the same object under measurement appearing in both images; and a depth determination module for determining the depth information of the object under measurement based on its positions in the two images.
According to a further aspect of the present invention, a depth information detection system is provided, including a processor and a memory, wherein computer program instructions are stored in the memory, and the computer program instructions, when run by the processor, execute the above depth information detection method.
Illustratively, the depth information detection system includes a camera, the camera includes two image sensors, and the two image sensors are the two image acquisition devices.
According to a further aspect of the present invention, a storage medium is provided, on which program instructions are stored, the program instructions being used at runtime to execute the above depth information detection method.
According to the depth information detection method, apparatus, system, and storage medium of embodiments of the present invention, the content actually contained in the images (for example, faces) is first detected, target objects are matched based on the detection results, and the depth information of the object under measurement is then determined from the detected target objects. Such a process roughly simulates the process of human depth perception and improves object matching correctness across a variety of environments. Moreover, because object matching is based on content understanding, the pictures imaged by the two cameras are not required to be nearly identical; even if the spacing between the two cameras is enlarged, the calculation of the depth information of the target object is unaffected, which facilitates an increased depth detection range. Further, because matching is content-based, the two cameras can even break the limitation of side-by-side mounting and can be installed at any angle with a large spacing. As long as the spatial geometric relationship and the orientation relationship of the two camera installations are known, once the positions of the same target object in the images acquired by the two cameras are obtained, the depth information of that target object can be derived from solid geometry.
Brief description of the drawings
The above and other objects, features, and advantages of the present invention will become more apparent from the following more detailed description of embodiments of the present invention in conjunction with the accompanying drawings. The drawings are provided to afford a further understanding of the embodiments of the present invention, constitute a part of the specification, serve together with the embodiments to explain the present invention, and are not to be construed as limiting the invention. In the drawings, identical reference labels typically denote the same components or steps.
Fig. 1 shows a schematic block diagram of an exemplary electronic device for implementing the depth information detection method and apparatus according to an embodiment of the present invention;
Fig. 2 shows a schematic flow chart of a depth information detection method according to an embodiment of the invention;
Fig. 3 shows a schematic flow chart of a depth information detection method in accordance with another embodiment of the invention;
Fig. 4 shows a schematic illustration of the principle of binocular ranging;
Fig. 5 shows a schematic block diagram of a depth information detection apparatus according to an embodiment of the invention; and
Fig. 6 shows a schematic block diagram of a depth information detection system according to an embodiment of the invention.
Detailed description of embodiments
To make the objects, technical solutions, and advantages of the present invention more apparent, example embodiments of the present invention are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention, and it should be understood that the present invention is not limited by the example embodiments described herein. All other embodiments obtained by those skilled in the art, based on the embodiments of the present invention described herein and without creative labor, should fall within the scope of the present invention.
Traditional depth camera solutions include laser radar (lidar) solutions, structured light solutions, binocular camera solutions, and so on. Lidar, structured light, and similar solutions are heterogeneous solutions: they require adding a set of heterogeneous hardware to an existing smart camera system and are comparatively difficult to integrate. A binocular camera solution only needs one additional side-by-side camera (or image sensor) in hardware and is convenient to extend and implement on a smart camera device.
However, since a traditional binocular camera cannot understand the content of an image, it cannot truly identify the specific locations at which the same object is imaged in the two cameras. A traditional binocular camera therefore generally uses conventional digital image processing methods, such as small-range region correlation function matching, to infer the imaging positions of the same object in the two images. In practical applications, these conventional methods are strongly affected by the environment: they depend entirely on whether the features of the photographed object itself are distinct, and they can fail on objects whose surface color and texture features are inconspicuous. The precision and correctness of matching by these conventional methods are difficult to guarantee. For example, if some region in one image is similar in color and texture to a region in the other image, the two regions are easily mistaken for the same object even if they contain entirely different content (one or both of them may not even contain an object of the desired class, such as a face). In addition, these conventional methods are usually based on the assumption that the acquisition environments of the two images are similar, requiring the contents of the two images (i.e., the two pictures) to be nearly identical, which dictates that the spacing between the two cameras cannot be too large, thereby limiting the range over which depth can be detected.
Embodiments of the present invention provide a depth information detection method, apparatus, system, and storage medium. According to embodiments of the present invention, the manner of obtaining the imaging positions of the same object in the two cameras is improved: the objects in the two images are matched directly in a content-understanding-based manner, and the positions of the same object in the two images are determined accordingly. The depth information detection method and apparatus according to embodiments of the present invention can be applied to any field in which objects need to be identified, such as face recognition.
First, an exemplary electronic device 100 for implementing the depth information detection method and apparatus according to an embodiment of the present invention is described with reference to Fig. 1.
As shown in Fig. 1, the electronic device 100 includes one or more processors 102 and one or more storage devices 104. Optionally, the electronic device 100 may also include an input device 106, an output device 108, and an image acquisition device 110, these components being interconnected by a bus system 112 and/or a connection mechanism of another form (not shown). It should be noted that the components and structure of the electronic device 100 shown in Fig. 1 are illustrative rather than restrictive; the electronic device may also have other components and structures as needed.
The processor 102 may be realized in hardware by at least one of a digital signal processor (DSP), a field programmable gate array (FPGA), a programmable logic array (PLA), and a microprocessor. The processor 102 may be a central processing unit (CPU), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), or a combination of one or more other forms of processing unit having data processing capability and/or instruction execution capability, and may control other components in the electronic device 100 to perform desired functions.
The storage device 104 may include one or more computer program products, and the computer program products may include computer-readable storage media in various forms, such as volatile memory and/or nonvolatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The nonvolatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions can be stored on the computer-readable storage medium, and the processor 102 can run the program instructions to realize the client functionality (realized by the processor) in the embodiments of the present invention described below and/or other desired functions. Various application programs and various data, such as data used and/or generated by the application programs, can also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (such as images and/or sounds) to the outside (for example, to a user), and may include one or more of a display, a loudspeaker, and the like. Optionally, the input device 106 and the output device 108 may be integrated and realized using the same interactive device (such as a touch screen).
The image acquisition device 110 may acquire images and store the acquired images in the storage device 104 for use by other components. The image acquisition device 110 may be a camera or an image sensor. The number of image acquisition devices 110 may be two, and an electronic device 100 containing two image acquisition devices 110 may be a binocular camera. It should be understood that the image acquisition device 110 is only an example and that the electronic device 100 need not include one. In that case, images may be acquired by another device with image acquisition capability, and the acquired images may be sent to the electronic device 100.
Illustratively, the exemplary electronic device for implementing the depth information detection method and apparatus according to an embodiment of the present invention may be realized in a device such as a personal computer or a remote server.
In the following, a depth information detection method according to an embodiment of the present invention is described with reference to Fig. 2. Fig. 2 shows a schematic flow chart of a depth information detection method 200 according to an embodiment of the present invention. As shown in Fig. 2, the depth information detection method 200 includes the following steps S210, S220, S230, and S240.
In step S210, two images acquired respectively by two image acquisition devices are obtained.
An image as described herein can be any image containing an object. An object as described herein can be any object, including but not limited to: a person or a part of the human body (such as a face), an animal, a vehicle, a building, and so on.
An image as described herein can be a still image or a video frame in a video. It can be an original image acquired by the image acquisition device, or an image obtained after preprocessing the original image (such as digitization, normalization, or smoothing).
It will be appreciated that the two images are images acquired by the two image acquisition devices at the same time. "At the same time" can mean that the time difference between the acquisition times of the two images is within a preset range [0, t], where t can be a preset time threshold, such as 10 milliseconds. Illustratively, the two image acquisition devices can be two image sensors. Optionally, the two image acquisition devices can be placed side by side in the horizontal direction, separated by a certain distance.
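The "same time" criterion above can be sketched as frame pairing on timestamps. The 10 ms default, the data layout (timestamp-sorted lists of (timestamp, frame) tuples), and the two-pointer sweep are illustrative assumptions, not the patent's specified procedure.

```python
def pair_synchronized_frames(stream_a, stream_b, max_skew_s=0.010):
    """Pair frames from two acquisition devices whose acquisition times
    differ by at most max_skew_s seconds (e.g., 10 ms).

    stream_a, stream_b -- lists of (timestamp_seconds, frame) tuples,
                          each sorted by timestamp
    Returns a list of (frame_a, frame_b) pairs.
    """
    pairs = []
    i = j = 0
    while i < len(stream_a) and j < len(stream_b):
        ta, fa = stream_a[i]
        tb, fb = stream_b[j]
        if abs(ta - tb) <= max_skew_s:
            pairs.append((fa, fb))
            i += 1
            j += 1
        elif ta < tb:
            i += 1  # frame from device A has no close-enough partner yet
        else:
            j += 1  # frame from device B has no close-enough partner yet
    return pairs
```

Frames without a partner within the threshold are simply skipped, so only image pairs that satisfy the [0, t] criterion proceed to target detection.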
In step S220, target detection is performed on each of the two images to determine the target objects contained in each image.
A target object is an object of a certain predefined type, such as a face. Any suitable existing or future target detection algorithm can be used to detect the target objects contained in each image. Detecting a target object means detecting the position at which the target object is located. The position of each target object can be indicated by a bounding box. The result of target detection can be the location information of several bounding boxes, and the location information of each bounding box can be the position coordinates of that bounding box. For example, a bounding box can be a rectangular frame, and its position coordinates can be four values: the abscissa x of its upper-left corner, the ordinate y of its upper-left corner, its width w, and its height h.
Illustratively, target detection can be realized using any suitable neural network model, such as a convolutional neural network. For example, any image can be input into the neural network model, and the neural network model can output the object location information of each target object in the image (such as the position coordinates of the bounding box of the target object). Illustratively, an image containing a target object can be extracted into tensor form to obtain an image tensor, which can represent the original image containing the target object. Inputting an image into the neural network model can mean inputting this image tensor into the neural network model.
In step S230, the target objects contained in the two images are matched to determine the same object under measurement appearing in both images.
Illustratively, any two target objects contained in the two images can be compared to judge whether they are the same target object. All target objects contained in the two images can be matched one by one, and an object under measurement can then be selected. Since the position of each target object contained in the two images has already been detected, once the object under measurement is determined, its positions in the two images are known.
Illustratively, an object under measurement can first be selected from the target objects contained in either image (the first image) of the two images; the object under measurement can then be compared, one by one, with the target objects contained in the other image (the second image) to find the target object that matches it. Again, since the position of each target object contained in the two images has already been detected, once the object under measurement is determined in the two images, its positions in the two images are known.
In one example, step S230 may include: calculating, for any two target objects contained respectively in the two images, the difference between their positions in the respective images, and determining the two target objects to be the same target object if the difference is less than a preset threshold; and selecting the matched object under measurement from the target objects contained in the two images.
As described above, illustratively, a target detection algorithm can be used to detect the target objects in an image, and the detection result can be the position coordinates of the bounding box relevant to each target object, indicating the position at which each target object is located. In this case, the bounding box of target object X in image I1 and the bounding box of target object Y in image I2 can be compared by calculating the difference between their coordinates. For example, the difference between the center point coordinates of the bounding box of target object X and the center point coordinates of the bounding box of target object Y can be calculated, and if the difference is less than a certain preset threshold, target object X and target object Y can be considered the same target object. In this way, the identical target objects in image I1 and image I2 can be determined. Then, a certain identical target object can be selected from the two images as the object under measurement.
Under normal circumstances, two image acquisition devices used to detect depth information, such as the two image sensors in a binocular camera, are relatively close to each other, so the gap between the positions of the same target object in the two images they respectively acquire is also small; whether two detections correspond to the same target object can therefore be judged from the positions of the target objects. This object matching manner requires little computation and runs fast, which is advantageous for fast depth information detection.
In another example, step S230 may include: matching the target objects contained in the two images using an object matching network, to obtain object matching information that is output by the object matching network and indicates the same target objects in the two images; and, based on the object matching information, selecting the matched object under measurement from the target objects contained in the two images.
The object matching network can be any suitable network, such as a convolutional neural network. In one example, the two images and the object location information of the target objects in the two images (such as the position coordinates of the bounding boxes of the target objects) can be input into the object matching network, and the object matching network outputs object matching information that can indicate all matching target objects in the two images. In another example, two image blocks containing target objects, extracted respectively from the two images, can be input into the object matching network, and the object matching network outputs a judgment of whether the target objects contained in the two image blocks are the same target object, i.e., the object matching information. Based on the object matching information, it can be determined which target objects in the two images are the same target object, and the object under measurement can then be selected from them.
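The internals of the object matching network are left open here. One hedged way to sketch the selection step is to compare feature vectors that such a network might produce for each detected object crop and pair the most similar detections across the two images; the cosine-similarity criterion, the 0.8 threshold, the greedy strategy, and the precomputed feature vectors are all illustrative assumptions standing in for a trained network.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def match_by_features(feats_1, feats_2, min_sim=0.8):
    """Greedy cross-image matching: each detection in image 1 takes the
    most similar unused detection in image 2, provided the similarity
    clears min_sim. Returns (index_in_image_1, index_in_image_2) pairs."""
    pairs, used = [], set()
    for i, u in enumerate(feats_1):
        best_j, best_sim = None, min_sim
        for j, v in enumerate(feats_2):
            if j in used:
                continue
            sim = cosine_similarity(u, v)
            if sim > best_sim:
                best_j, best_sim = j, sim
        if best_j is not None:
            pairs.append((i, best_j))
            used.add(best_j)
    return pairs
```

Because the comparison uses characteristics of the objects themselves rather than image positions, it does not depend on the cameras being close together, which is the property the following paragraph emphasizes.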
Matching objects with an object matching network uses characteristic information of the objects themselves. This matching manner is not limited by the distance or arrangement direction of the image acquisition devices; no matter how the images are acquired, high-accuracy object matching can be realized. Using this object matching manner is therefore advantageous for improving the accuracy of depth information detection.
In step S240, the depth information of the object under measurement is determined based on its positions in the two images.
The location information of the object under measurement may include its object location information, such as the position coordinates of its bounding box. The location information of the object under measurement may also include the key point location information of its key points. Illustratively, the positions of the key points of the object under measurement in the two images can be detected, the disparity can be calculated based on those positions, and the depth information of the key points of the object under measurement can then be calculated. The depth information of the object under measurement can then be determined based on the depth information of its key points.
As described above, a traditional binocular camera does not truly understand the content of an image and does not truly identify the position of each object; it only uses simple digital image processing methods to infer the imaging positions of the same object in the two images. In contrast, the depth information detection method according to an embodiment of the present invention first performs target detection and truly determines the position of each object in the image. Through target detection, the content contained in the image can genuinely be understood, and high-level semantic information in the image can be extracted. Compared with the detection manner of a traditional binocular camera, the depth information detection method according to an embodiment of the present invention understands the image more deeply and more accurately, and the detection result of the depth information is therefore also more accurate.
According to the depth information detection method of embodiments of the present invention, the content actually contained in the images (such as faces) is first detected, target objects are matched based on the detection results, and the depth information of the object under measurement is then determined based on the detected target objects. Such a process roughly simulates the process of human depth perception and improves object matching correctness in various environments. Moreover, because object matching is based on content understanding, the pictures imaged by the two cameras are not required to be nearly identical, so even if the spacing between the two cameras is enlarged, the calculation of the depth information of the target object is unaffected, which facilitates an increased depth detection range. Further, because matching is content-based, the two cameras can even break the limitation of side-by-side mounting and can be installed at any angle with a large spacing. As long as the spatial geometric relationship and the orientation relationship of the two camera installations are known, once the positions of the same target object in the images acquired by the two cameras are obtained, the depth information of that target object can be derived from solid geometry.
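The solid-geometry relationship invoked here can be sketched as ray triangulation: with each camera's pose known, the object's image position defines a viewing ray from that camera, and the object lies near the point where the two rays come closest. The midpoint-of-closest-approach construction below is a standard textbook formula, not the patent's specific procedure, and the example camera poses are assumptions.

```python
def triangulate(p1, d1, p2, d2):
    """Midpoint of the closest approach of two rays p + s*d, where p is a
    camera center and d is the viewing direction toward the object's
    image position (both as 3-element lists). Assumes the rays are not
    parallel."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    q1 = [p + s * v for p, v in zip(p1, d1)]  # closest point on ray 1
    q2 = [p + t * v for p, v in zip(p2, d2)]  # closest point on ray 2
    return [(u + v) / 2 for u, v in zip(q1, q2)]
```

For instance, with cameras at (0, 0, 0) and (1, 0, 0) whose rays both point at an object at (0.5, 0, 2), the triangulated point is [0.5, 0.0, 2.0]; its z component is the depth. Because only the camera geometry enters, the construction works for any installation angle and spacing, matching the claim above.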
Illustratively, the depth information detection method according to an embodiment of the present invention can be realized in a unit or system having a memory and a processor.
The depth information detection method according to an embodiment of the present invention can be deployed at an image acquisition end; for example, in the security application field it can be deployed at the image acquisition end of an access control system, and in the financial application field it can be deployed at a personal terminal such as a smart phone, a tablet computer, or a personal computer.
Alternatively, the depth information detection method according to an embodiment of the present invention can also be deployed in a distributed manner at a server end (or cloud) and a personal terminal. For example, an image can be obtained at a client, the client transmits the obtained image to the server end (or cloud), and the server end (or cloud) carries out the depth information detection.
Fig. 3 shows a schematic flowchart of a depth information detection method 300 in accordance with another embodiment of the present invention. Steps S310-S330 and S350 of the depth information detection method 300 shown in Fig. 3 correspond to steps S210-S240 of the depth information detection method 200 shown in Fig. 2, respectively; those skilled in the art can understand steps S310-S330 and S350 with reference to the above description of the depth information detection method 200, and details are not repeated here.
As shown in Fig. 3, the depth information detection method 300 may further include step S340 before step S350. In step S340, key point detection is performed on the target object to be measured in each of the two images, to determine the positions, in the two images, of one or more key points of the target object to be measured. Accordingly, step S350 may include: determining the depth information of the target object to be measured based on the positions of the one or more key points in the two images.
A key point described herein may be any feature point on the target object and may be set as needed. It will be appreciated that a key point described herein is a predetermined feature point that carries relatively high-level semantic information. For example, when the target object is a face, the key points may include points on the facial contour, points on the left and right eyebrows, the centre points of the left and right pupils, the nose tip, points on the lips, and so on. Compared with basic image pixels, the key points of a target object carry higher-level semantic information and can therefore identify the target object more accurately. Consequently, calculating the depth information of key points and determining the depth information of the target object therefrom requires little computation and yields high accuracy of the depth information.
Herein, the description mainly takes a face as an example of the target object; however, this is merely an example, and the target object may be another kind of object. For example, in a pedestrian monitoring scene the target object may be a pedestrian, and the key points may be certain joints of the pedestrian, such as the elbow joints and the knee joints. As another example, in a vehicle monitoring scene the target object may be a vehicle, and the key points may be certain feature points of the vehicle, such as corner points on the vehicle's edges.
The key points of each target object included in each image may be detected using any suitable key point detection method, whether existing or developed in the future. Detecting the key points of a target object means detecting the position of each key point; the key point position information of each key point may include the coordinates of that key point in the image.
Illustratively, key point detection may be implemented using any suitable neural network model, such as a convolutional neural network. For example, an image block containing the target object, or the original image together with the object position information of each target object in it, may be input into the neural network model, and the neural network model may output the key point position information of the key points of each target object in the image.
Illustratively, step S350 may include: for each of the one or more key points, determining the parallax of that key point between the two image acquisition devices based on its positions in the two images; for each of the one or more key points, calculating the depth information of that key point based on its parallax between the two image acquisition devices; and determining the depth information of the target object to be measured based on the depth information of the one or more key points.
For example, the parallax of the same key point of the same face (such as the nose tip) can be obtained from its position difference between the two images; from this parallax and the distance between the two image acquisition devices, the distance between that key point (the nose tip) and the centres of the two image acquisition devices, i.e. the depth information of that key point, can be computed in reverse. The depth information of each face may be represented by the depth information of a certain key point on that face, or by depth information obtained by combining the depth information of multiple key points on the face.
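The parallax-to-depth inverse calculation described above can be sketched as follows for a rectified camera pair. The focal length, baseline and pixel coordinates are illustrative values, not parameters prescribed by the method.

```python
def keypoint_depth(x_left, x_right, f, B):
    """Depth of one key point (e.g. the nose tip of a face) from its
    horizontal imaging positions in a rectified stereo pair.

    x_left, x_right: x-coordinates (pixels) of the same key point in the
                     two images; f: focal length in pixels; B: centre
                     distance (baseline) of the two image acquisition devices.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at infinity or mismatched")
    return f * B / disparity

# A face whose nose tip images at x=340 px in the left view and x=315 px in
# the right view, with f=800 px and B=0.06 m:
depth = keypoint_depth(340.0, 315.0, f=800.0, B=0.06)  # 1.92 m
```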
In one embodiment, the number of the one or more key points is one, and determining the depth information of the target object to be measured based on the depth information of the one or more key points may include: determining the depth information of that key point to be the depth information of the target object to be measured.
When the target object to be measured is far from the two image acquisition devices, the gaps between the depths of its different key points are small, and the gap between the depth of any key point and the overall depth of the target object (for example, the depth of its centre of gravity) is also small; therefore the depth information of any single key point can represent the depth information of the target object. This approach detects quickly, and the detected depth information of the target object is reasonably accurate.
In another embodiment, the number of the one or more key points is more than one, and determining the depth information of the target object to be measured based on the depth information of the one or more key points may include: combining the depth information of the multiple key points to obtain combined depth information; and determining the combined depth information to be the depth information of the target object to be measured.
When the target object to be measured is close to the two image acquisition devices, the gaps between the depths of its different key points become noticeable; directly taking the depth information of a single key point as the depth information of the target object might then introduce a large error. In such a case, the depth information of multiple key points can be combined to determine the depth information of the target object. The combination may be arbitrary; for example, the depth information of the multiple key points may be weighted and averaged, and the resulting average taken as the depth information of the target object to be measured. This approach improves the accuracy of depth information detection.
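The weighted-average combination mentioned above might look like the following sketch; the weights and depth values are purely illustrative (the patent leaves the combination scheme open).

```python
def combined_depth(keypoint_depths, weights=None):
    """Combine per-keypoint depths into one depth for the object.

    With no weights this is a plain average; passing weights (e.g. detection
    confidences) gives the weighted average described above.
    """
    if weights is None:
        weights = [1.0] * len(keypoint_depths)
    total = sum(w * d for w, d in zip(weights, keypoint_depths))
    return total / sum(weights)

# Nose, left-eye and right-eye depths of one face, nose weighted higher:
d = combined_depth([0.48, 0.52, 0.50], weights=[2.0, 1.0, 1.0])  # 0.495
```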
According to an embodiment of the present invention, for each of the one or more key points, calculating the depth information of that key point based on its parallax between the two image acquisition devices may include calculating the depth information of each of the one or more key points based on the following formula:
Z_i = f·B / (X_Ri − X_Ti)
where Z_i is the depth information of the i-th key point, f is the focal length of the two image acquisition devices, B is the centre distance of the two image acquisition devices, and X_Ri and X_Ti are respectively the distances from the imaging points of the i-th key point in the two images to a predetermined edge of the corresponding image, X_Ri − X_Ti being the parallax of the i-th key point between the two image acquisition devices.
Illustratively, the depth information of a key point can be calculated using the triangulation principle, i.e. the distance of the key point is computed in reverse by comparing the coordinate difference (the parallax) of the imaging of the same key point between the two images. Fig. 4 shows a schematic diagram of the binocular ranging principle. As shown in Fig. 4, P is a certain key point on the target object to be measured, O_R and O_T are the optical centres of the two image acquisition devices, and the imaging points of point P on the photoreceptors of the two image acquisition devices are P_R and P_T respectively (the imaging planes have been rotated to lie in front of the lenses). X_R and X_T are respectively the distances from P_R and P_T to the left edges of their respective images, f is the focal length of the two image acquisition devices, B is the centre distance of the two image acquisition devices, and Z is the depth information of key point P. Let the distance from point P_R to point P_T be dis; then:
dis = B − (X_R − X_T)
According to the similar-triangle relationship, it follows that:
(B − (X_R − X_T)) / B = (Z − f) / Z
Rearranging yields:
Z = f·B / (X_R − X_T)
In the formula above, the focal length f of the image acquisition devices and the centre distance B of the image acquisition devices can be obtained by calibration; therefore, once X_R − X_T (i.e. the parallax d) is obtained, the depth information of key point P can be computed.
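The derivation above can be checked numerically: project a key point P of known depth through a synthetic rectified camera pair and recover Z from the parallax. All numbers are assumed values; X_R and X_T are measured here from the principal points rather than the image edges, a constant shift that cancels in the difference X_R − X_T.

```python
# Synthetic rectified camera pair (assumed calibration values):
f = 800.0        # focal length in pixels
B = 0.06         # centre distance of the two cameras in metres
Z = 2.5          # true depth of key point P
X_world = 0.3    # lateral position of P relative to the left optical centre

XR = f * X_world / Z            # imaging position of P in the left view
XT = f * (X_world - B) / Z      # imaging position of P in the right view
d = XR - XT                     # parallax

Z_recovered = f * B / d         # Z = f*B / (XR - XT)
assert abs(Z_recovered - Z) < 1e-9
```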
In the above embodiments, the two image acquisition devices have the same parameter configuration, i.e. the same focal length f. This, however, is merely exemplary and not a limitation of the present invention; the two image acquisition devices may have different parameter configurations, in which case the calculation formula for the depth information of a key point may change accordingly.
Where a neural network model is used for one or more of target detection, key point detection and object matching, the neural network model used may be trained in advance. Training methods for such networks are known to those skilled in the art and are not repeated here.
According to another aspect of the present invention, a depth information detection device is provided. Fig. 5 shows a schematic block diagram of a depth information detection device 500 according to an embodiment of the present invention.
As shown in Fig. 5, the depth information detection device 500 according to an embodiment of the present invention includes an image acquisition module 510, a target detection module 520, an object matching module 530 and a depth determination module 540. The modules can respectively execute the steps/functions of the depth information detection method described above in connection with Figs. 2-4. Only the main functions of the components of the depth information detection device 500 are described below; details already described above are omitted.
The image acquisition module 510 is configured to obtain two images acquired respectively by two image acquisition devices. The image acquisition module 510 can be implemented by the processor 102 in the electronic device shown in Fig. 1 running program instructions stored in the storage device 105.
The target detection module 520 is configured to perform target detection on each of the two images, to determine the target objects included in each of the two images. The target detection module 520 can be implemented by the processor 102 in the electronic device shown in Fig. 1 running program instructions stored in the storage device 105.
The object matching module 530 is configured to match the target objects included in the two images, to determine the same target object to be measured that matches across the two images. The object matching module 530 can be implemented by the processor 102 in the electronic device shown in Fig. 1 running program instructions stored in the storage device 105.
The depth determination module 540 is configured to determine the depth information of the target object to be measured based on its position information in the two images. The depth determination module 540 can be implemented by the processor 102 in the electronic device shown in Fig. 1 running program instructions stored in the storage device 105.
Illustratively, the depth information detection device 500 further includes a key point detection module (not shown), configured to perform, before the depth determination module 540 determines the depth information of the target object to be measured based on its position information in the two images, key point detection on the target object to be measured in each of the two images, so as to determine the positions, in the two images, of one or more key points of the target object to be measured. The depth determination module 540 then includes a depth determination submodule configured to determine the depth information of the target object to be measured based on the positions of the one or more key points in the two images.
Illustratively, the depth determination submodule includes: a parallax determination unit configured to determine, for each of the one or more key points, the parallax of that key point between the two image acquisition devices based on its positions in the two images; a depth calculation unit configured to calculate, for each of the one or more key points, the depth information of that key point based on its parallax between the two image acquisition devices; and a depth determination unit configured to determine the depth information of the target object to be measured based on the depth information of the one or more key points.
Illustratively, the number of the one or more key points is one, and the depth determination unit includes a first determination subunit configured to determine the depth information of that key point to be the depth information of the target object to be measured.
Illustratively, the number of the one or more key points is more than one, and the depth determination unit includes: a combination subunit configured to combine the depth information of the multiple key points to obtain combined depth information; and a second determination subunit configured to determine the combined depth information to be the depth information of the target object to be measured.
Illustratively, the depth calculation unit is specifically configured to calculate the depth information of each of the one or more key points based on the following formula:
Z_i = f·B / (X_Ri − X_Ti)
where Z_i is the depth information of the i-th key point, f is the focal length of the two image acquisition devices, B is the centre distance of the two image acquisition devices, and X_Ri and X_Ti are respectively the distances from the imaging points of the i-th key point in the two images to a predetermined edge of the corresponding image, X_Ri − X_Ti being the parallax of the i-th key point between the two image acquisition devices.
Illustratively, the object matching module 530 includes: a calculation submodule configured to calculate the difference between the positions, in the respective images, of any two target objects respectively included in the two images, and to determine the two target objects to be the same target object if the difference is less than a preset threshold; and a first selection submodule configured to select the matching target object to be measured from the target objects included in the two images.
Illustratively, the object matching module 530 includes: a matching submodule configured to match the target objects included in the two images using an object matching network, to obtain object matching information output by the object matching network and indicating the same target object in the two images; and a second selection submodule configured to select, based on the object matching information, the matching target object to be measured from the target objects included in the two images.
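A minimal sketch of the position-difference matching performed by the calculation submodule, assuming detections are represented by their centre coordinates; the pixel threshold and the greedy strategy are illustrative choices, not mandated by the patent.

```python
import math

def match_objects(boxes_a, boxes_b, threshold=50.0):
    """Greedily match detections between the two images by centre distance.

    boxes_a, boxes_b: lists of (cx, cy) detection centres in each image.
    Two detections whose centre distance is below `threshold` pixels are
    taken to be the same target object. Returns index pairs (i, j).
    """
    matches, used = [], set()
    for i, (ax, ay) in enumerate(boxes_a):
        best, best_d = None, threshold
        for j, (bx, by) in enumerate(boxes_b):
            if j in used:
                continue
            d = math.hypot(ax - bx, ay - by)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches.append((i, best))
            used.add(best)
    return matches

pairs = match_objects([(100, 80), (400, 90)], [(110, 78), (395, 95)])
# → [(0, 0), (1, 1)]
```

In practice the threshold would depend on the camera spacing and image resolution; an appearance-based object matching network, as described next, removes this dependence.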
Those of ordinary skill in the art may appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.
Fig. 6 shows a schematic block diagram of a depth information detection system 600 according to an embodiment of the present invention. The depth information detection system 600 includes an image acquisition device 610, a storage device (i.e. a memory) 620 and a processor 630.
The image acquisition device 610 is configured to acquire images. The image acquisition device 610 is optional, and the depth information detection system 600 may omit it; in that case, images may be acquired using another image acquisition device and the acquired images sent to the depth information detection system 600.
The storage device 620 stores computer program instructions for implementing the corresponding steps of the depth information detection method according to an embodiment of the present invention.
The processor 630 is configured to run the computer program instructions stored in the storage device 620, so as to execute the corresponding steps of the depth information detection method according to an embodiment of the present invention.
In one embodiment, the computer program instructions, when run by the processor 630, are configured to execute the following steps: obtaining two images acquired respectively by two image acquisition devices; performing target detection on each of the two images, to determine the target objects included in each of the two images; matching the target objects included in the two images, to determine the same target object to be measured that matches across the two images; and determining the depth information of the target object to be measured based on its position information in the two images.
Illustratively, the depth information detection system 600 includes a camera, the camera includes two image sensors, and the two image sensors are the two image acquisition devices. In this embodiment, the image acquisition device 610 is an image sensor.
Illustratively, before the step, executed when the computer program instructions are run by the processor 630, of determining the depth information of the target object to be measured based on its position information in the two images, the computer program instructions, when run by the processor 630, are also configured to execute the following step: performing key point detection on the target object to be measured in each of the two images, to determine the positions, in the two images, of one or more key points of the target object to be measured. The step, executed when the computer program instructions are run by the processor 630, of determining the depth information of the target object to be measured based on its position information in the two images then includes: determining the depth information of the target object to be measured based on the positions of the one or more key points in the two images.
Illustratively, the step, executed when the computer program instructions are run by the processor 630, of determining the depth information of the target object to be measured based on the positions of the one or more key points in the two images includes: for each of the one or more key points, determining the parallax of that key point between the two image acquisition devices based on its positions in the two images; for each of the one or more key points, calculating the depth information of that key point based on its parallax between the two image acquisition devices; and determining the depth information of the target object to be measured based on the depth information of the one or more key points.
Illustratively, the number of the one or more key points is one, and the step, executed when the computer program instructions are run by the processor 630, of determining the depth information of the target object to be measured based on the depth information of the one or more key points includes: determining the depth information of that key point to be the depth information of the target object to be measured.
Illustratively, the number of the one or more key points is more than one, and the step, executed when the computer program instructions are run by the processor 630, of determining the depth information of the target object to be measured based on the depth information of the one or more key points includes: combining the depth information of the multiple key points to obtain combined depth information; and determining the combined depth information to be the depth information of the target object to be measured.
Illustratively, for each of the one or more key points, the step, executed when the computer program instructions are run by the processor 630, of calculating the depth information of that key point based on its parallax between the two image acquisition devices includes calculating the depth information of each of the one or more key points based on the following formula:
Z_i = f·B / (X_Ri − X_Ti)
where Z_i is the depth information of the i-th key point, f is the focal length of the two image acquisition devices, B is the centre distance of the two image acquisition devices, and X_Ri and X_Ti are respectively the distances from the imaging points of the i-th key point in the two images to a predetermined edge of the corresponding image, X_Ri − X_Ti being the parallax of the i-th key point between the two image acquisition devices.
Illustratively, the step, executed when the computer program instructions are run by the processor 630, of matching the target objects included in the two images to determine the same target object to be measured that matches across the two images includes: calculating the difference between the positions, in the respective images, of any two target objects respectively included in the two images, and determining the two target objects to be the same target object if the difference is less than a preset threshold; and selecting the matching target object to be measured from the target objects included in the two images.
Illustratively, the step, executed when the computer program instructions are run by the processor 630, of matching the target objects included in the two images to determine the same target object to be measured that matches across the two images includes: matching the target objects included in the two images using an object matching network, to obtain object matching information output by the object matching network and indicating the same target object in the two images; and selecting, based on the object matching information, the matching target object to be measured from the target objects included in the two images.
In addition, according to an embodiment of the present invention, a storage medium is provided, on which program instructions are stored. The program instructions, when run by a computer or processor, are configured to execute the corresponding steps of the depth information detection method of the embodiments of the present invention and to implement the corresponding modules of the depth information detection device according to the embodiments of the present invention. The storage medium may include, for example, a memory card of a smart phone, a storage unit of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media.
In one embodiment, the program instructions, when run by a computer or processor, can cause the computer or processor to implement the functional modules of the depth information detection device according to the embodiments of the present invention and/or to execute the depth information detection method according to the embodiments of the present invention.
In one embodiment, the program instructions are configured, at runtime, to execute the following steps: obtaining two images acquired respectively by two image acquisition devices; performing target detection on each of the two images, to determine the target objects included in each of the two images; matching the target objects included in the two images, to determine the same target object to be measured that matches across the two images; and determining the depth information of the target object to be measured based on its position information in the two images.
Illustratively, before the step, executed at runtime by the program instructions, of determining the depth information of the target object to be measured based on its position information in the two images, the program instructions are also configured, at runtime, to execute the following step: performing key point detection on the target object to be measured in each of the two images, to determine the positions, in the two images, of one or more key points of the target object to be measured. The step, executed at runtime by the program instructions, of determining the depth information of the target object to be measured based on its position information in the two images then includes: determining the depth information of the target object to be measured based on the positions of the one or more key points in the two images.
Illustratively, the step, executed at runtime by the program instructions, of determining the depth information of the target object to be measured based on the positions of the one or more key points in the two images includes: for each of the one or more key points, determining the parallax of that key point between the two image acquisition devices based on its positions in the two images; for each of the one or more key points, calculating the depth information of that key point based on its parallax between the two image acquisition devices; and determining the depth information of the target object to be measured based on the depth information of the one or more key points.
Illustratively, the number of the one or more key points is one, and the step, executed at runtime by the program instructions, of determining the depth information of the target object to be measured based on the depth information of the one or more key points includes: determining the depth information of that key point to be the depth information of the target object to be measured.
Illustratively, the number of the one or more key points is more than one, and the step, executed at runtime by the program instructions, of determining the depth information of the target object to be measured based on the depth information of the one or more key points includes: combining the depth information of the multiple key points to obtain combined depth information; and determining the combined depth information to be the depth information of the target object to be measured.
Illustratively, for each of the one or more key points, the step, executed at runtime by the program instructions, of calculating the depth information of that key point based on its parallax between the two image acquisition devices includes calculating the depth information of each of the one or more key points based on the following formula:
Z_i = f·B / (X_Ri − X_Ti)
where Z_i is the depth information of the i-th key point, f is the focal length of the two image acquisition devices, B is the centre distance of the two image acquisition devices, and X_Ri and X_Ti are respectively the distances from the imaging points of the i-th key point in the two images to a predetermined edge of the corresponding image, X_Ri − X_Ti being the parallax of the i-th key point between the two image acquisition devices.
Illustratively, the step, executed at runtime by the program instructions, of matching the target objects included in the two images to determine the same target object to be measured that matches across the two images includes: calculating the difference between the positions, in the respective images, of any two target objects respectively included in the two images, and determining the two target objects to be the same target object if the difference is less than a preset threshold; and selecting the matching target object to be measured from the target objects included in the two images.
Illustratively, the step, executed at runtime by the program instructions, of matching the target objects included in the two images to determine the same target object to be measured that matches across the two images includes: matching the target objects included in the two images using an object matching network, to obtain object matching information output by the object matching network and indicating the same target object in the two images; and selecting, based on the object matching information, the matching target object to be measured from the target objects included in the two images.
The modules in the depth information detection system according to the embodiments of the present invention may be implemented by the processor of the electronic device implementing depth information detection according to an embodiment of the present invention running computer program instructions stored in the memory, or may be implemented when computer instructions stored in the computer-readable storage medium of a computer program product according to an embodiment of the present invention are run by a computer.
Although the example embodiments have been described herein with reference to the accompanying drawings, it should be understood that the above example embodiments are merely exemplary and are not intended to limit the scope of the present invention thereto. Those of ordinary skill in the art may make various changes and modifications therein without departing from the scope and spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as claimed in the appended claims.
In the several embodiments provided in this application, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division of the units is only one kind of logical function division, and there may be other division manners in actual implementation: multiple units or components may be combined or integrated into another device, or some features may be ignored or not executed.
Numerous specific details are set forth in the specification provided here. It is to be understood, however, that the embodiments of the present invention may be practised without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, the disclosed method is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in fewer than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will understand that, except where such features are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that, in practice, a microprocessor or a digital signal processor (DSP) may be used to implement some or all of the functions of some modules in the depth information detection apparatus according to embodiments of the invention. The invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for executing part or all of the method described herein. Such a program implementing the invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such a signal may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-described embodiments illustrate rather than limit the invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claims. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.
The above description is merely of specific embodiments of the invention, or explanations of specific embodiments, and the protection scope of the invention is not limited thereto. Any change or substitution that would readily occur to a person skilled in the art within the technical scope disclosed by the invention shall fall within the protection scope of the invention. The protection scope of the invention shall be subject to the protection scope of the claims.
Claims (12)
1. A depth information detection method, comprising:
obtaining two images respectively acquired by two image acquisition devices;
performing target detection on each of the two images to determine the target objects contained in each of the two images;
matching the target objects contained in the two images to determine a same target object to be measured that matches between the two images; and
determining depth information of the target object to be measured based on position information of the target object to be measured in the two images.
2. The method of claim 1, wherein, before the determining the depth information of the target object to be measured based on the position information of the target object to be measured in the two images, the method further comprises:
performing keypoint detection on the target object to be measured in each of the two images to determine positions, in the two images, of one or more keypoints of the target object to be measured;
and wherein the determining the depth information of the target object to be measured based on the position information of the target object to be measured in the two images comprises:
determining the depth information of the target object to be measured based on the positions of the one or more keypoints in the two images.
3. The method of claim 2, wherein the determining the depth information of the target object to be measured based on the positions of the one or more keypoints in the two images comprises:
for each of the one or more keypoints:
determining a disparity of the keypoint between the two image acquisition devices based on the positions of the keypoint in the two images; and
calculating depth information of the keypoint based on the disparity of the keypoint between the two image acquisition devices; and
determining the depth information of the target object to be measured based on the depth information of the one or more keypoints.
4. The method of claim 3, wherein the number of the one or more keypoints is one, and the determining the depth information of the target object to be measured based on the depth information of the one or more keypoints comprises:
determining the depth information of the keypoint as the depth information of the target object to be measured.
5. The method of claim 3, wherein the number of the one or more keypoints is plural, and the determining the depth information of the target object to be measured based on the depth information of the one or more keypoints comprises:
combining the depth information of the plurality of keypoints to obtain combined depth information; and
determining the combined depth information as the depth information of the target object to be measured.
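Claim 5 leaves the combination rule open. A minimal sketch, assuming an arithmetic mean as the combination (the claim does not specify any particular rule, so the averaging choice here is purely illustrative):

```python
def combine_keypoint_depths(depths):
    """Combine per-keypoint depth values into one depth for the object.

    The claim does not fix the combination rule; a plain arithmetic
    mean is used here only for illustration.
    """
    if not depths:
        raise ValueError("at least one keypoint depth is required")
    return sum(depths) / len(depths)

# e.g. three keypoint depths (in meters) measured on one detected object
combined = combine_keypoint_depths([2.1, 2.0, 2.2])  # ~2.1 m
```

A median would be an equally valid (and more outlier-robust) combination under the same claim language.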
6. The method of claim 3, wherein, for each of the one or more keypoints, the calculating the depth information of the keypoint based on the disparity of the keypoint between the two image acquisition devices comprises:
calculating the depth information of each of the one or more keypoints based on the following formula:
Zi = f·B / (XRi − XTi)
wherein Zi is the depth information of the i-th keypoint, f is the focal length of the two image acquisition devices, B is the center distance between the two image acquisition devices, XRi and XTi are respectively the distances from the imaging points of the i-th keypoint on the two images to a predetermined edge of the corresponding image, and XRi − XTi is the disparity of the i-th keypoint between the two image acquisition devices.
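The relation in claim 6 is the standard stereo triangulation formula Z = f·B / (XR − XT), as implied by the claim's own term definitions. A minimal sketch (variable names and units are illustrative assumptions, not taken from the patent):

```python
def keypoint_depth(f, B, x_r, x_t):
    """Depth of one keypoint from the stereo relation Z = f*B / (xR - xT).

    f   : focal length of the two image acquisition devices (pixels)
    B   : center distance (baseline) between the two devices (meters)
    x_r : distance of the keypoint's imaging point in the first image
          from the predetermined image edge (pixels)
    x_t : the same distance in the second image; x_r - x_t is the disparity
    """
    disparity = x_r - x_t
    if disparity == 0:
        raise ValueError("zero disparity: the point is at infinity")
    return f * B / disparity

# focal length 700 px, baseline 0.1 m, disparity 35 px -> depth of 2.0 m
z = keypoint_depth(700.0, 0.1, 400.0, 365.0)
```

Note the inverse relationship: halving the disparity doubles the computed depth, which is why a longer baseline B (as the abstract's wide-spacing installation allows) improves far-range depth resolution.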
7. The method of claim 1, wherein the matching the target objects contained in the two images to determine the same target object to be measured that matches between the two images comprises:
calculating, for any two target objects respectively contained in the two images, a difference between their positions in the respective images, and determining that the two target objects are the same target object if the difference is less than a preset threshold; and
selecting the matching target object to be measured from the target objects contained in the two images.
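The position-difference matching of claim 7 can be sketched as follows. Representing each detection by its bounding-box center and using Euclidean distance are illustrative assumptions; the claim only requires some position difference compared against a preset threshold:

```python
import math

def match_targets(centers_a, centers_b, threshold):
    """Pair detections from two images whose positions differ by less
    than `threshold` (the criterion of claim 7).

    centers_a, centers_b: lists of (cx, cy) detection centers, one list
    per image. Returns (index_in_a, index_in_b) pairs; each detection
    in the second image is matched at most once.
    """
    matches = []
    used_b = set()
    for i, (ax, ay) in enumerate(centers_a):
        for j, (bx, by) in enumerate(centers_b):
            if j in used_b:
                continue
            if math.hypot(ax - bx, ay - by) < threshold:
                matches.append((i, j))
                used_b.add(j)
                break
    return matches

pairs = match_targets([(100, 50), (300, 80)], [(95, 52), (290, 85)], threshold=20)
```

This greedy pairing is a simplification; claim 8's learned object-matching network is the patent's alternative for scenes where a position threshold alone is ambiguous.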
8. The method of claim 1, wherein the matching the target objects contained in the two images to determine the same target object to be measured that matches between the two images comprises:
matching the target objects contained in the two images using an object matching network, to obtain object matching information that is output by the object matching network and that indicates the same target objects in the two images; and
selecting, based on the object matching information, the matching target object to be measured from the target objects contained in the two images.
9. A depth information detection apparatus, comprising:
an image obtaining module configured to obtain two images respectively acquired by two image acquisition devices;
a target detection module configured to perform target detection on each of the two images to determine the target objects contained in each of the two images;
an object matching module configured to match the target objects contained in the two images to determine a same target object to be measured that matches between the two images; and
a depth determination module configured to determine depth information of the target object to be measured based on position information of the target object to be measured in the two images.
10. A depth information detection system, comprising a processor and a memory, wherein computer program instructions are stored in the memory, and the computer program instructions, when run by the processor, are used to execute the depth information detection method of any one of claims 1 to 8.
11. The system of claim 10, wherein the depth information detection system comprises a camera, the camera comprises two image sensors, and the two image sensors serve as the two image acquisition devices.
12. A storage medium having program instructions stored thereon, the program instructions being used, when run, to execute the depth information detection method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810262054.3A CN108876835A (en) | 2018-03-28 | 2018-03-28 | Depth information detection method, device and system and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108876835A true CN108876835A (en) | 2018-11-23 |
Family
ID=64326155
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810262054.3A Pending CN108876835A (en) | 2018-03-28 | 2018-03-28 | Depth information detection method, device and system and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108876835A (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102316307A (en) * | 2011-08-22 | 2012-01-11 | 安防科技(中国)有限公司 | Road traffic video detection method and apparatus thereof |
CN105814401A (en) * | 2013-12-16 | 2016-07-27 | 索尼公司 | Image processing device, image processing method, and imaging device |
US20150381972A1 (en) * | 2014-06-30 | 2015-12-31 | Microsoft Corporation | Depth estimation using multi-view stereo and a calibrated projector |
CN106033621A (en) * | 2015-03-17 | 2016-10-19 | 阿里巴巴集团控股有限公司 | Three-dimensional modeling method and device |
CN105023010A (en) * | 2015-08-17 | 2015-11-04 | 中国科学院半导体研究所 | Face living body detection method and system |
US20180041747A1 (en) * | 2016-08-03 | 2018-02-08 | Samsung Electronics Co., Ltd. | Apparatus and method for processing image pair obtained from stereo camera |
CN106780590A (en) * | 2017-01-03 | 2017-05-31 | 成都通甲优博科技有限责任公司 | The acquisition methods and system of a kind of depth map |
CN106952304A (en) * | 2017-03-22 | 2017-07-14 | 南京大学 | A kind of depth image computational methods of utilization video sequence interframe correlation |
CN107316326A (en) * | 2017-06-29 | 2017-11-03 | 海信集团有限公司 | Applied to disparity map computational methods of the binocular stereo vision based on side and device |
Non-Patent Citations (2)
Title |
---|
邹建成 et al.: "Mathematics and Its Applications in Image Processing" (《数学及其在图像处理中的应用》), 31 July 2015, Beijing: Beijing University of Posts and Telecommunications Press * |
马晓路 et al.: "MATLAB Image Processing from Beginner to Master" (《MATLAB图像处理从入门到精通》), 28 February 2013, Beijing: China Railway Publishing House * |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109300154A (en) * | 2018-11-27 | 2019-02-01 | 郑州云海信息技术有限公司 | A kind of distance measuring method and device based on binocular solid |
WO2020199563A1 (en) * | 2019-04-01 | 2020-10-08 | 四川深瑞视科技有限公司 | Method, device, and system for detecting depth information |
CN110505403A (en) * | 2019-08-20 | 2019-11-26 | 维沃移动通信有限公司 | A kind of video record processing method and device |
CN111753611A (en) * | 2019-08-30 | 2020-10-09 | 北京市商汤科技开发有限公司 | Image detection method, device and system, electronic equipment and storage medium |
CN111457886A (en) * | 2020-04-01 | 2020-07-28 | 北京迈格威科技有限公司 | Distance determination method, device and system |
CN111457886B (en) * | 2020-04-01 | 2022-06-21 | 北京迈格威科技有限公司 | Distance determination method, device and system |
CN112153306B (en) * | 2020-09-30 | 2022-02-25 | 深圳市商汤科技有限公司 | Image acquisition system, method and device, electronic equipment and wearable equipment |
CN112153306A (en) * | 2020-09-30 | 2020-12-29 | 深圳市商汤科技有限公司 | Image acquisition system, method and device, electronic equipment and wearable equipment |
WO2022110877A1 (en) * | 2020-11-24 | 2022-06-02 | 深圳市商汤科技有限公司 | Depth detection method and apparatus, electronic device, storage medium and program |
CN113159161A (en) * | 2021-04-16 | 2021-07-23 | 深圳市商汤科技有限公司 | Target matching method and device, equipment and storage medium |
WO2022218161A1 (en) * | 2021-04-16 | 2022-10-20 | 上海商汤智能科技有限公司 | Method and apparatus for target matching, device, and storage medium |
CN113345000A (en) * | 2021-06-28 | 2021-09-03 | 北京市商汤科技开发有限公司 | Depth detection method and device, electronic equipment and storage medium |
WO2023273499A1 (en) * | 2021-06-28 | 2023-01-05 | 上海商汤智能科技有限公司 | Depth measurement method and apparatus, electronic device, and storage medium |
CN113673569A (en) * | 2021-07-21 | 2021-11-19 | 浙江大华技术股份有限公司 | Target detection method, target detection device, electronic equipment and storage medium |
CN116958931A (en) * | 2023-07-20 | 2023-10-27 | 山东产研鲲云人工智能研究院有限公司 | Method and computing device for vehicle collision early warning in warehouse |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108876835A (en) | Depth information detection method, device and system and storage medium | |
CN109711243B (en) | Static three-dimensional face in-vivo detection method based on deep learning | |
US10198623B2 (en) | Three-dimensional facial recognition method and system | |
CN108875524B (en) | Sight estimation method, device, system and storage medium | |
CN109740491B (en) | Human eye sight recognition method, device, system and storage medium | |
CN107256377B (en) | Method, device and system for detecting object in video | |
US8989455B2 (en) | Enhanced face detection using depth information | |
CN106033601B (en) | The method and apparatus for detecting abnormal case | |
JP5024067B2 (en) | Face authentication system, method and program | |
JP6544900B2 (en) | Object identification device, object identification method and program | |
KR101362631B1 (en) | Head recognition method | |
EP3241151A1 (en) | An image face processing method and apparatus | |
KR20160029629A (en) | Method and apparatus for face recognition | |
JP2020529685A5 (en) | ||
CN105574525A (en) | Method and device for obtaining complex scene multi-mode biology characteristic image | |
US10915739B2 (en) | Face recognition device, face recognition method, and computer readable storage medium | |
CN111008935B (en) | Face image enhancement method, device, system and storage medium | |
KR20150065445A (en) | Apparatus and method for detecting frontal face image using facial pose | |
KR20150128510A (en) | Apparatus and method for liveness test, and apparatus and method for image processing | |
CN108875509A (en) | Biopsy method, device and system and storage medium | |
RU2370817C2 (en) | System and method for object tracking | |
CN114170690A (en) | Method and device for living body identification and construction of living body identification model | |
CN112801038B (en) | Multi-view face in-vivo detection method and system | |
CN111027434B (en) | Training method and device of pedestrian recognition model and electronic equipment | |
US7653219B2 (en) | System and method for image attribute recording an analysis for biometric applications |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20181123 |