CN106408633A - Object imaging method - Google Patents
- Publication number
- CN106408633A CN106408633A CN201610298284.6A CN201610298284A CN106408633A CN 106408633 A CN106408633 A CN 106408633A CN 201610298284 A CN201610298284 A CN 201610298284A CN 106408633 A CN106408633 A CN 106408633A
- Authority
- CN
- China
- Prior art keywords
- profile
- ultrasonic wave
- gray scale
- wave module
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the field of object imaging, and in particular to an object imaging method. The method comprises the following steps: obtaining a contour grayscale background image and a three-dimensional contour grayscale image of an object; and matching the contour grayscale background image and the three-dimensional contour grayscale image of the object to generate a three-dimensional composite image of the object. With this method, the moving image and travel distance of the object can be detected precisely, so the recognition and detection rate of objects on a production line at night is improved, and a robot is enabled to avoid obstacles freely according to the high-precision three-dimensional composite image of the object.
Description
Technical field
The present invention relates to the field of object imaging, and more particularly to an object imaging method.
Background technology
Object imaging devices are mostly applied in industrial settings. For example, on the conveyor chains of a parts production line, an object imaging device captures the contour structure and distance of parts so that a robotic arm can be controlled to grasp them.
The prior art discloses an object imaging device that photographs a background image of the object with a camera and then analyzes the imaging features in this background image to generate a composite image.
In the course of making the present invention, the inventors found that the prior art has at least the following problem: because existing imaging devices generally use binocular ranging, the measured data are imprecise, stability is poor, the rate of object misidentification is high, and the computed distance deviates from the actual distance.
Summary of the invention
To overcome the above technical problem, the present invention aims to provide an object imaging method, a corresponding device, and a robot, which solve the technical problem of low imaging precision in the prior art.
To solve the above technical problem, the present invention provides the following technical solutions:
In a first aspect, an embodiment of the present invention provides an object imaging method comprising the following steps:
obtaining a contour grayscale background image and a three-dimensional contour grayscale image of an object;
matching the contour grayscale background image and the three-dimensional contour grayscale image of the object to generate a three-dimensional composite image of the object.
Optionally, obtaining the contour background image of the object specifically includes:
obtaining object image data collected by a camera module;
storing the object image data as grayscale image data;
extracting the edge contour of the object from the grayscale image data;
generating the contour grayscale background image of the object.
Optionally, obtaining the three-dimensional contour grayscale image of the object specifically includes:
obtaining a dot-matrix image of the object;
performing on the dot-matrix image the isosurface rendering used in medical three-dimensional image reconstruction;
generating the three-dimensional contour grayscale image of the object.
Optionally, obtaining the dot-matrix image of the object specifically includes:
obtaining object contour information collected by an ultrasonic array, the ultrasonic array including N ultrasonic modules, where N = NX * NY, NX is the number of ultrasonic modules per row, NY is the number of ultrasonic modules per column, and N, NX, and NY are positive integers; the NX ultrasonic modules in each row form a collection window; the collection windows are equal in height in the column direction and equal in length in the row direction; the NX ultrasonic modules of each collection window are arranged in the column direction and are assigned first fixed height values that rise in steps with row order, the distance between successive first fixed height values being equal;
obtaining the maximum distance value between each ultrasonic module and the object it detects;
associating and storing the first fixed height values and the maximum distance values to obtain coordinate values;
reading the coordinate values and plotting them at the corresponding positions in the coordinate system of the display screen using a screen point-drawing function;
connecting the coordinate points on the display screen using a screen line-drawing function to generate the dot-matrix image of the object.
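The acquisition steps above can be sketched minimally as follows. This is an illustrative model only: the function name and the sample readings are assumptions, not the patent's firmware; each module's first fixed height value supplies a Y coordinate, its maximum measured distance supplies an X coordinate, and the resulting points are connected in order.

```python
def build_dot_matrix(heights, distances):
    """Pair each module's first fixed height value (y) with its
    maximum measured distance (x), forming the coordinate list that
    the screen point- and line-drawing functions would plot."""
    if len(heights) != len(distances):
        raise ValueError("one distance reading per module expected")
    points = list(zip(distances, heights))   # (x, y) per module
    # Consecutive pairs a screen line-drawing call would connect.
    segments = list(zip(points, points[1:]))
    return points, segments

# Hypothetical 3-module column with step heights 10, 20, 30
points, segments = build_dot_matrix([10, 20, 30], [120, 95, 80])
```

Connecting the segments for every collection window would then trace the object's contour line layer by layer.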
Optionally, matching the contour grayscale background image and the three-dimensional contour grayscale image of the object specifically includes:
applying a normalized product grayscale matching process to the contour grayscale background image and the three-dimensional contour grayscale image of the object.
In a second aspect, an embodiment of the present invention provides an object imaging device, comprising:
a camera module for photographing an object and producing object image data;
an ultrasonic array for collecting object contour information;
a microprocessor for obtaining, from the object image data and the object contour information, a contour grayscale background image and a three-dimensional contour grayscale image of the object, and for matching the contour grayscale background image and the three-dimensional contour grayscale image of the object to generate a three-dimensional composite image of the object.
Optionally, the microprocessor is specifically configured to: store the object image data as grayscale image data; extract the edge contour of the object from the grayscale image data; and generate the contour grayscale background image of the object.
Optionally, the microprocessor is specifically configured to: obtain a dot-matrix image of the object; perform on the dot-matrix image the isosurface rendering used in medical three-dimensional image reconstruction; and generate the three-dimensional contour grayscale image of the object.
Optionally, the microprocessor is specifically configured to:
obtain object contour information collected by the ultrasonic array, the ultrasonic array including N ultrasonic modules, where N = NX * NY, NX is the number of ultrasonic modules per row, NY is the number of ultrasonic modules per column, and N, NX, and NY are positive integers; the NX ultrasonic modules in each row form a collection window; the collection windows are equal in height in the column direction and equal in length in the row direction; the NX ultrasonic modules of each collection window are arranged in the column direction and are assigned first fixed height values that rise in steps with row order, the distance between successive first fixed height values being equal;
obtain the maximum distance value between each ultrasonic module and the object it detects;
associate and store the first fixed height values and the maximum distance values to obtain coordinate values;
read the coordinate values and plot them at the corresponding positions in the coordinate system of the display screen using a screen point-drawing function;
connect the coordinate points on the display screen using a screen line-drawing function to generate the dot-matrix image of the object.
In a third aspect, an embodiment of the present invention provides a robot, comprising:
a camera module for photographing an object and producing object image data;
an ultrasonic array for collecting object contour information;
a microprocessor for obtaining, from the object image data and the object contour information, a contour grayscale background image and a three-dimensional contour grayscale image of the object, and for matching the contour grayscale background image and the three-dimensional contour grayscale image of the object to generate a three-dimensional composite image of the object;
a steering control device for controlling the orientation of the camera module and/or the ultrasonic array according to control instructions from the microprocessor;
a sensor module for collecting the current attitude of the robot and outputting attitude data, the microprocessor outputting the control instructions according to the attitude data.
Optionally, the robot further includes:
a wireless communication module for uploading the microprocessor's data to an intelligent terminal and/or a server;
a display module for showing the three-dimensional composite image of the object generated by the microprocessor.
In the embodiments of the present invention, a contour grayscale background image and a three-dimensional contour grayscale image of the object are obtained and matched against each other, thereby generating a high-precision three-dimensional composite image of the object. With this method, the moving image and travel distance of the object can be detected accurately, which improves the recognition and detection rate of objects on a production line at night and enables a robot to avoid obstacles freely according to the high-precision three-dimensional composite image of the object.
Brief description
Fig. 1 is a schematic flowchart of an object imaging method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of obtaining the contour grayscale background image of an object, provided by an embodiment of the present invention;
Fig. 2a is a schematic diagram of generating the contour grayscale background image of the object, provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of obtaining the three-dimensional contour grayscale image of an object, provided by an embodiment of the present invention;
Fig. 4 is a schematic flowchart of obtaining the dot-matrix image of an object, provided by an embodiment of the present invention;
Fig. 4a is a schematic diagram of the display window of a display screen, provided by an embodiment of the present invention;
Fig. 4b is a schematic diagram of the collection windows formed by the ultrasonic array, provided by an embodiment of the present invention;
Fig. 4c is a schematic diagram of the contour-line composition with which the ultrasonic array captures a moving object, provided by an embodiment of the present invention;
Fig. 4d is a schematic diagram of object imaging according to an embodiment of the present invention;
Fig. 5 shows an object imaging device provided by an embodiment of the present invention;
Fig. 6 is a schematic circuit block diagram of a robot provided by an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a robot provided by an embodiment of the present invention.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein only explain the present invention and do not limit it.
Refer to Fig. 1, a schematic flowchart of an object imaging method provided by an embodiment of the present invention. As shown in Fig. 1, the method comprises the following steps:
S11, obtaining a contour grayscale background image and a three-dimensional contour grayscale image of the object;
In step S11, the contour grayscale background image of the object may be obtained by laser imaging: a laser beam with a very small divergence angle is directed at the object to form laser measurement points, a point detector receives the laser signal reflected or scattered from each measurement point, the target distance is obtained by inversion, and the contour grayscale background image of the object is then further derived from the distances. The contour grayscale background image may also be obtained by infrared imaging, or by photographing the object with a camera module and deriving the image from the photographs. Of course, other means of obtaining the contour grayscale background image of the object may be used as well.
In the present embodiment, the object may be a part on a production line or another object under inspection, a vehicle in the traffic field, a floating object in space, and so on. As for the current motion state of the object, it may be stationary or moving; in most application scenarios, the detected object is moving.
In the present embodiment, a camera module is used to obtain the contour grayscale background image of the object. Refer to Fig. 2, a schematic flowchart of obtaining the contour grayscale background image of an object, provided by an embodiment of the present invention. As shown in Fig. 2, the flow includes:
S21, obtaining object image data collected by the camera module;
In step S21, the camera module includes at least two cameras. Collecting object image data with two cameras helps the three-dimensional contour grayscale image obtained in the subsequent steps generate the three-dimensional composite image of the object precisely, and lets the object's movement data be obtained accurately in real time; this movement data includes the distance between the object and a given machine as well as the object's current moving direction. Of course, the designer may deploy more cameras to collect object image data according to the operational goal; the module is not limited to two cameras.
S22, storing the object image data as grayscale image data;
In step S22, while the camera module collects object image data, its output format is set to YUV (a pixel format in which the luma and chroma parameters are represented separately). The Y component is kept, the U and V components are discarded, and the Y component is converted to the RGB565 color mode, so that the object image data collected by the camera module is output as a grayscale-type image.
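The Y-plane extraction just described can be sketched as follows. The packed YUYV (YUV 4:2:2) byte layout is an assumption about the camera's output, not stated in the patent; the RGB565 packing follows the standard 5-6-5 bit split.

```python
def yuyv_to_gray(buf):
    """Keep only the luma (Y) bytes of a packed YUYV (YUV 4:2:2)
    buffer, discarding the interleaved U and V chroma bytes."""
    return bytes(buf[0::2])  # Y samples sit at even byte offsets

def gray_to_rgb565(g):
    """Replicate one 8-bit gray level into a 16-bit RGB565 word."""
    return ((g >> 3) << 11) | ((g >> 2) << 5) | (g >> 3)

# Hypothetical two-pixel frame: Y0 U Y1 V
frame = bytes([200, 128, 201, 128])
gray = yuyv_to_gray(frame)
```

Dropping the chroma bytes this way halves the data volume before any further processing, which is consistent with the embodiment's emphasis on reducing computation.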
In step S22, the grayscale image data has a fixed bit depth and conforms to a specific image format. For example, in the present embodiment the grayscale image data is eight-bit BMP (bitmap-file) format data. Using eight-bit BMP data reduces the processor's computation load and memory footprint, saving image-processing time; the method provided by the present embodiment is therefore highly real-time.
In step S22, the grayscale object image data collected by the camera module is first pre-stored in the graphics RAM (GRAM) of the display module. To store the object image data as grayscale image data of a fixed bit depth and specific format, the processor reads the color value of each point from the display module's image register, scanning from left to right and top to bottom. That is, according to the first address of the grayscale object image stored in the image register, the width and height of the image, and the number of bytes per row of pixels, the processor opens a buffer as a cached image and then scans the whole original image point by point (except the top row and the rightmost border). The buffer is initialized to 255. At this point the display module displays at 30 frames per second, preserving the order in which the image register is read. The processor saves the object image data as 8-bit grayscale BMP format data. An 8-bit grayscale bitmap can represent at most 256 colors: each pixel is represented by 8 bits, which index an entry of the color table to give the pixel's color. If the first byte in the bitmap is 0x1F, the color of that pixel is looked up in the 32nd entry of the color table. Under the default condition the palette holds 256 RGB entries, corresponding to indices 0 through 255.
S23, extracting the edge contour of the object from the grayscale image data;
In step S23, an edge detection algorithm is used to extract the edge contour of the object from the grayscale image data. Specifically, the Roberts operator is used to compute the gray value at the current pixel of the grayscale image data, and this value is recorded at the corresponding position of the cached image.
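A minimal sketch of the Roberts cross operator named above, using the common |g1| + |g2| magnitude approximation over a 2x2 neighborhood; the scan skips the last row and column, matching the point-by-point scan described in step S22. The patch data is hypothetical.

```python
def roberts_magnitude(img, x, y):
    """Roberts cross gradient magnitude at (x, y): the absolute
    differences along the two diagonals of the 2x2 neighborhood."""
    g1 = img[y][x] - img[y + 1][x + 1]
    g2 = img[y + 1][x] - img[y][x + 1]
    return min(255, abs(g1) + abs(g2))

def roberts_edges(img):
    """Scan all but the last row and column into an edge buffer."""
    h, w = len(img), len(img[0])
    return [[roberts_magnitude(img, x, y) for x in range(w - 1)]
            for y in range(h - 1)]

# Hypothetical 3x3 gray patch with a vertical step edge
patch = [[0, 0, 255],
         [0, 0, 255],
         [0, 0, 255]]
edges = roberts_edges(patch)
```

The operator responds strongly where the diagonals cross the step edge and stays zero in the flat region, which is why the recorded values trace the object's contour.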
S24, generating the contour grayscale background image of the object.
In step S24, the processor reads the cached image data from the buffer and obtains the 8-bit contour grayscale background image of the object.
Refer to Fig. 2a, a schematic diagram of generating the contour grayscale background image of the object, provided by an embodiment of the present invention. As shown in Fig. 2a, camera module 2a1 collects image data of an original gesture 2a2 and, after processing, obtains the contour grayscale background image 2a3 of the gesture.
Storing the object image data as 8-bit grayscale image data in this way reduces the computation load of image processing. Compared with existing object imaging methods that process the camera's images directly, it avoids the system delays caused by the heavy computation of image processing, and it solves the problem that such heavy computation is unsuitable for embedded processors.
In the present embodiment, the processor reads the color value of each point from the display module's image register from left to right and top to bottom. In the process it obtains both the color value of each point and each point's coordinate on the display screen, which saves a processing step when the generated three-dimensional composite image of the object is later shown on the display module.
Refer to Fig. 3, a schematic flowchart of obtaining the three-dimensional contour grayscale image of an object, provided by an embodiment of the present invention. As shown in Fig. 3, the flow includes:
S31, obtaining a dot-matrix image of the object;
In step S31, the contour of the moving or stationary object is captured in real time, and a dot-matrix image of the object's contour is drawn.
Refer to Fig. 4, a schematic flowchart of obtaining the dot-matrix image of an object, provided by an embodiment of the present invention. As shown in Fig. 4, the flow includes:
S41, obtaining object contour information collected by the ultrasonic array;
In step S41, the ultrasonic array includes N ultrasonic modules combined into one array, where N = NX * NY, NX is the number of ultrasonic modules per row, NY is the number of ultrasonic modules per column, and N, NX, and NY are positive integers. The NX ultrasonic modules in each row form a collection window; the collection windows are equal in height in the column direction and equal in length in the row direction. The NX ultrasonic modules of each collection window are arranged in the column direction and are assigned first fixed height values that rise in steps with row order, the distance between successive first fixed height values being equal.
In the present embodiment, optionally, N is 15, NX is 3, and NY is 5, and the ultrasonic modules in the array are named in order from left to right and top to bottom, from the first ultrasonic module through the fifteenth ultrasonic module. The number of ultrasonic modules in the array is not limited to 15; the designer may choose the number of ultrasonic modules according to the operational goal.
In the present embodiment, a collection window is the ultrasonic-signal emitter window formed by the ultrasonic modules of one row that collect object contour information. Each row of the ultrasonic array can form a different collection window, and the different collection windows are distinguished by the notion of layers. For example, when N is 15, refer to Fig. 4a, a schematic diagram of the display window of the display screen provided by an embodiment of the present invention. As shown in Fig. 4a, the first to third ultrasonic modules form the first-layer collection window 4a1, the fourth to sixth ultrasonic modules form the second-layer collection window 4a2, the seventh to ninth ultrasonic modules form the third-layer collection window 4a3, the tenth to twelfth ultrasonic modules form the fourth-layer collection window 4a4, and the thirteenth to fifteenth ultrasonic modules form the fifth-layer collection window 4a5. The ultrasonic modules of every layer are equal in height in the column direction and equal in length in the row direction.
Referring again to Fig. 4a, the present embodiment opens a main window 401 according to the height and width of the display screen's display window. The main window is formed by stacking the five collection windows described above, and the height and width of the screen's display window are both known. The height of main window 401 along the Y axis equals the height of the display window of the display screen, and its width along the X axis equals the width of the display window.
In the present embodiment, when N is 15, the height of each layer's collection window in main window 401 along the Y axis equals one fifth of the display window of the display screen; here, each layer's collection window is 30 high along the Y axis. The width of each layer's collection window in main window 401 along the X axis equals the width of the display window, and also equals the maximum distance value detected between each ultrasonic module and the object.
The NX ultrasonic modules of each collection window are arranged in the column direction and are assigned first fixed height values that rise in steps with row order, the distance between successive first fixed height values being equal. In the present embodiment, NX equals 3. Refer to Fig. 4b, a schematic diagram of the collection windows formed by the ultrasonic array, provided by an embodiment of the present invention. As shown in Fig. 4b, in the first-layer collection window 4a1 the first fixed height value of the first ultrasonic module is 10, that of the second ultrasonic module is 20, and that of the third ultrasonic module is 30; continuing in this way, the first fixed height value of the fifteenth ultrasonic module is 150.
In the present embodiment, the first fixed height value of each ultrasonic module corresponds to a coordinate value on the Y axis.
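The step-height assignment of this example (15 modules, equal spacing of 10, module 1 at 10 and module 15 at 150) can be generated as follows; the function name is illustrative.

```python
def first_fixed_heights(n_modules, step=10):
    """Equally spaced first fixed height values rising with module
    order: step, 2*step, ..., n_modules*step."""
    return [step * (k + 1) for k in range(n_modules)]

heights = first_fixed_heights(15)  # module 1 -> 10, module 15 -> 150
```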
S42, obtaining the maximum distance value between each ultrasonic module and the object it detects;
In step S42, the maximum distance value between each ultrasonic module and the object it detects corresponds to a coordinate value on the X axis.
S43, associating and storing the first fixed height values and the maximum distance values to obtain coordinate values;
In step S43, following the transpose-matrix principle of linear algebra, a new lattice data structure is created; the first fixed height values and the maximum distance values are associated and stored in it, yielding the coordinate values to be shown on the display screen.
S44, reading the coordinate values, and plotting them at the corresponding positions in the coordinate system of the display screen using a screen point-drawing function;
In step S44, the processor reads the stored coordinate values and, through the screen point-drawing function, plots each point at its coordinate on the display screen. Repeating this within one cycle completes the point-drawn matrix arrangement, and the processor then converts the drawn points into the corresponding X-axis and Y-axis pixel data of the display module for on-screen display.
Refer to Fig. 4c, a schematic diagram of the contour-line composition with which the ultrasonic array captures a moving object, provided by an embodiment of the present invention. As shown in Fig. 4c, in different collection windows the ultrasonic array 4c1 captures the contour-line compositions of different parts of one complete gesture. For example, the gesture contour lines seen by the second-layer collection window 4a2 through the fourth-layer collection window 4a4 differ: the modules in the second collection window 4a2 capture the ring-finger part of the gesture, while the modules in the third collection window 4a3 and the fourth collection window 4a4 capture most of the area below the thumb. Composing the gesture contour lines of the second-layer collection window 4a2 through the fourth-layer collection window 4a4 together forms one complete gesture dot-matrix image 4c2.
S45, connecting the coordinate points on the display screen using a screen line-drawing function to generate the dot-matrix image of the object.
Referring again to Fig. 4c, the gesture dot-matrix image 4c2 is generated through the screen line-drawing function. In the gesture dot-matrix image 4c2, three identical gestures at different distances can be seen.
In the present embodiment, the obtained object dot-matrix image is further refined. Specifically, the image data of the object's dot-matrix image undergoes noise elimination, parameter-domain conversion, and normal-vector calculation, and the points are connected to produce the object dot-matrix image, which now has definite geometry, pixel, and other attributes. According to design needs, these properties and features are simplified and analyzed to reduce the data volume, lowering the image computation load and improving real-time performance.
In the present embodiment, the dot-matrix image obtained through the above steps, with its definite geometry, pixel, and other attributes, is then processed with a scan-conversion technique to obtain the final dot-matrix image of the object.
S32, performing on the dot-matrix image the isosurface rendering used in medical three-dimensional image reconstruction;
S33, generating the three-dimensional contour grayscale image of the object.
Regarding steps S32 and S33, the isosurface rendering in medical three-dimensional reconstruction (marching cubes) processes the cubes (voxels) of the data field one by one, sorts out the cubes that intersect the isosurface, and computes by interpolation the intersection points of the isosurface with the cube edges. Within each cell, the algorithm's two main computations for isosurface extraction are the calculation of the triangular facets that approximate the isosurface inside the voxel and the calculation of the normal at each facet vertex. The gradient at each voxel corner is computed with central differences, and the normal at each facet vertex is then obtained by linearly interpolating the gradients at the two endpoints of the voxel edge, thereby achieving the isosurface rendering.
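The per-edge interpolation step of marching cubes mentioned above computes where the isosurface crosses a voxel edge. A minimal sketch of that single step follows (the 256-case facet table and vertex-normal interpolation are omitted; the sample values are hypothetical):

```python
def interp_vertex(p1, p2, v1, v2, iso):
    """Linearly interpolate the point on the edge from p1 (scalar
    value v1) to p2 (scalar value v2) where the isosurface with
    iso-value `iso` crosses the edge."""
    if v1 == v2:
        return p1  # degenerate edge: no unique crossing
    t = (iso - v1) / (v2 - v1)
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))

# Edge from (0,0,0) with value 0 to (1,0,0) with value 10, iso-value 5
pt = interp_vertex((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.0, 10.0, 5.0)
```

The same interpolation weight t is what the embodiment reuses to blend the corner gradients into facet-vertex normals.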
S12, matching the contour grayscale background image and the three-dimensional contour grayscale image of the object to generate the three-dimensional composite image of the object.
In step S12, a normalized product grayscale matching process is applied to the contour grayscale background image and the three-dimensional contour grayscale image of the object. In an illuminated environment the contour grayscale background image is valid, and the processor matches the three-dimensional contour grayscale image against it. Verified through this matching, the method's detection results are more precise and the fast imaging is more three-dimensional.
In step S12, in environments with illuminance below 0 lux, such as at night, the contour grayscale background image collected by the camera module is blank and cannot serve as a calibration reference for matching. Therefore, each time the ultrasonic array collects object contour information, the stored contour gray level is reduced by 10%, and after 5 counts the buffer is automatically cleared and reloaded. A bitmap data structure is then re-established that records the XY coordinate values and gray values of each round of bitmap data. By fetching the data of the lattice data structure and the bitmap data structure, triangular facets are generated from the XY coordinate values, and through computational matching the distance and moving direction of the moving object are calculated. Within one cycle, the dot-matrix grayscale images of different depths are superimposed and shown on the display screen of the display module, so the trend of the moving object can be observed visually in the picture; with the measured ultrasonic distance values annotated beside it on the display screen, all the figures are merged to generate the three-dimensional composite image. Therefore, the direction and distance of the object's movement can be captured even in low-illuminance environments such as night.
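The night-mode bookkeeping described above (each new contour capture decays the stored gray levels by 10%, and the buffer is cleared and reloaded after 5 captures) might be sketched as follows. The class name, counts-per-clear interpretation, and dictionary layout are all assumptions about the described scheme, not the patent's data structures.

```python
class NightContourBuffer:
    """Decay stored gray values 10% per capture; automatically clear
    and reload after 5 captures (interpretation of the embodiment)."""
    def __init__(self):
        self.captures = 0
        self.gray = {}  # (x, y) coordinate -> gray value

    def add_capture(self, points):
        self.captures += 1
        if self.captures > 5:       # automatic clear and reload
            self.gray.clear()
            self.captures = 1
        for xy in self.gray:        # 10% gray-level decay per round
            self.gray[xy] *= 0.9
        for xy in points:           # newest contour at full gray
            self.gray[xy] = 255.0

buf = NightContourBuffer()
buf.add_capture([(0, 0)])
buf.add_capture([(1, 1)])  # the earlier point has now faded
```

Superimposing such decayed layers is what lets the older, fainter contours convey the object's direction of motion on screen.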
Refer to Fig. 4d, a schematic diagram of object imaging according to an embodiment of the present invention. As shown in Fig. 4d, the left side is the image of the moving object and the right side is its movement trace. In this way, the method images objects much as the human eye acquires them: on the one hand it can image the overall structural contour of the object, and on the other hand it can characterize the object's distance and moving direction.
With the method provided by this embodiment of the invention, the picture and the distance of an object's movement can be detected accurately, which improves the recognition and detection rate of objects on a production line at night and enables a robot to avoid obstacles flexibly according to the high-precision three-dimensional comprehensive image of the object.
Because the method provided by this embodiment adopts a three-dimensional profile imaging mode of operation in low-illuminance environments such as night, it can still work effectively under those conditions and is applicable to many scenarios. On a battlefield, for example, a soldier can discover the direction and distance of an intruder's movement in pitch-dark surroundings.
Because the method provided by this embodiment uses the ultrasonic-array ranging dot-matrix imaging technique, it can still work effectively in adverse weather such as thick fog, and a firefighter can find a target in a dense-smoke environment.
Because the method provided by this embodiment uses real-time ranging and real-time calculation of the moving object's distance, a user can calculate the size, arrival time and distance of an object from its moving direction and speed. Further, on a space station it can be used to capture or recover floating objects or space debris.
With the method provided by this embodiment, because the imaging field of view of the camera matches that of the dot-matrix imaging device, every target covered by the field of view can be ranged effectively, so full-frame ranging of moving objects is achievable. The method can therefore pick out parts of a particular size on a production line.
Fig. 5 shows an object imaging device provided by an embodiment of the present invention. As shown in Fig. 5, the device includes:
a camera module 51 for shooting an object and producing object image data;
an ultrasonic array 52 for collecting object contour information; and
a microprocessor 53 for obtaining, from the object image data and the object contour information, the profile gray-scale background image and the three-dimensional profile gray-scale map of the object, and for matching the profile gray-scale background image and the three-dimensional profile gray-scale map to generate the three-dimensional comprehensive image of the object.
Further, the microprocessor is specifically configured to: store the object image data as gray-scale image data; extract the edge contour of the object from the gray-scale image data; and generate the profile gray-scale background image of the object.
Further, the microprocessor is specifically configured to: obtain the dot-matrix map of the object, apply to it the iso-surface rendering used in medical-image three-dimensional rendering, and generate the three-dimensional profile gray-scale map of the object.
Further, the microprocessor is specifically configured to:
obtain the object contour information collected by the ultrasonic array, the ultrasonic array comprising N ultrasonic modules with N = N_X * N_Y, where N_X is the number of ultrasonic modules per row, N_Y is the number of ultrasonic modules per column, and N, N_X and N_Y are positive integers; the N_X ultrasonic modules in each row form a collecting window, each collecting window having an equal height in the column direction and an equal length in the row direction; in the column direction, the N_X ultrasonic modules of each collecting window have first fixed height values that rise in row order, with equal spacing between successive first fixed height values;
obtain the maximum range value between each ultrasonic module and the object it detects;
associate and store the first fixed height values with the maximum range values to obtain coordinate values;
read the coordinate values and plot them, via the screen draw-point function, at the corresponding coordinates of the display screen's coordinate system; and
connect the coordinate points on the display screen via the screen draw-line function to generate the dot-matrix map of the object.
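The coordinate-generation step above can be sketched as follows: each module in a collecting window contributes one point whose ordinate is its equally spaced fixed height and whose abscissa is its measured maximum range, and adjacent points are connected into a polyline. This is an illustrative sketch only; the spacing, scaling, and axis assignment are assumptions, not values from the patent.

```python
# Illustrative sketch of turning one collecting window's readings into
# screen coordinates: each of the N_X modules has a fixed height value
# (equally spaced), and each reports a maximum range to the object.
# Pairing (range, height) yields one plotted point per module; connecting
# adjacent points gives one polyline of the object's dot-matrix map.
# base_height and step are assumed constants for illustration.

def window_coordinates(max_ranges, base_height=0.0, step=1.0):
    """Pair each module's fixed height with its measured range."""
    heights = [base_height + i * step for i in range(len(max_ranges))]
    return list(zip(max_ranges, heights))   # (x, y) screen coordinates

def polyline_segments(points):
    """Segments the screen draw-line function would connect."""
    return list(zip(points, points[1:]))

ranges = [120.0, 118.5, 117.0, 118.0]       # one reading per module
pts = window_coordinates(ranges)
segs = polyline_segments(pts)
```

Repeating this for every collecting window, row by row, would fill in the full dot-matrix map of the object.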
Further, the microprocessor is specifically configured to apply normalized gray-scale matching to the profile gray-scale background image and the three-dimensional profile gray-scale map of the object.
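The patent does not give a formula for the normalized gray-scale matching step, but one common realization is zero-mean normalized cross-correlation (NCC), which is invariant to uniform brightness offsets between the two gray-scale images. The sketch below is an assumption about how such a comparison could be computed, not the claimed method.

```python
# Zero-mean normalized cross-correlation (NCC) of two equal-size
# gray-scale images, flattened to 1-D lists. This is one standard way
# to realize "normalized gray-scale matching"; the patent itself does
# not specify the formula, so this choice is an assumption.

def ncc(a, b):
    """NCC score in [-1, 1]; 1.0 means a perfect gray-scale match."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    da = [x - mean_a for x in a]
    db = [x - mean_b for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den if den else 0.0

img = [10, 50, 200, 90]
same = ncc(img, img)                        # identical images
shifted = ncc(img, [x + 30 for x in img])   # uniform brightness offset
```

Because the mean is subtracted and the result is normalized, a uniform brightness change between the background image and the profile map does not disturb the match score.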
In this embodiment, the microprocessor may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these parts. Furthermore, the microprocessor here may be any conventional processor, controller, microcontroller or state machine. The processor may also be implemented as a combination of computing devices, for example a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors combined with a DSP core, or any other such configuration.
The object imaging device of this embodiment can accurately detect the picture and the distance of an object's movement, thereby improving the recognition and detection rate of objects on a production line at night and enabling a robot to avoid obstacles flexibly according to the high-precision three-dimensional comprehensive image of the object.
Fig. 6 is a circuit block diagram of a robot provided by an embodiment of the present invention. As shown in Fig. 6, the robot includes:
a camera module 61 for shooting an object and producing object image data;
an ultrasonic array 62 for collecting object contour information;
a microprocessor 63 for obtaining, from the object image data and the object contour information, the profile gray-scale background image and the three-dimensional profile gray-scale map of the object, and for matching the profile gray-scale background image and the three-dimensional profile gray-scale map to generate the three-dimensional comprehensive image of the object;
a steering control device 64 for controlling, according to control instructions from the microprocessor, the steering of the camera module and/or the ultrasonic array; and
a sensor module 65 for collecting the current pose of the robot and outputting attitude data, the microprocessor outputting the control instructions according to the attitude data.
The robot further includes a wireless communication module 66 for uploading the microprocessor's data to an intelligent terminal and/or a server, and a display device module 67 for showing the three-dimensional comprehensive image of the object generated by the microprocessor.
It also includes a memory 68 for storing video and/or image data. Specifically, the microprocessor 63 reads image data from the image register, stores it in JPG picture format, and then composes the JPG pictures into AVI video format for storage.
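The frame pipeline just described (read the image register, store JPG frames, compose them into an AVI) can be sketched as plain file bookkeeping. Actual encoding would use a video library such as OpenCV's `cv2.VideoWriter`; the naming scheme and the stand-in mux step below are illustrative assumptions, not the patent's implementation.

```python
# Sketch of the JPG-frames-to-AVI storage flow: frames are named in
# capture order (zero-padded, as a video muxer expects) and then handed
# to a mux step. compose_avi is a stand-in for a real encoder such as
# cv2.VideoWriter; the names here are illustrative assumptions.

def frame_filenames(count, prefix="frame"):
    """Zero-padded JPG names in capture order."""
    return [f"{prefix}_{i:04d}.jpg" for i in range(count)]

def compose_avi(frames, out="capture.avi"):
    """Stand-in for the mux step: returns (output name, frame count)."""
    return out, len(frames)

names = frame_filenames(3)
video, n = compose_avi(names)
```

Zero-padded names keep lexicographic order equal to capture order, so the muxer reads frames in the right sequence without extra sorting logic.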
Optionally, the camera module 61 of this embodiment uses a first camera 611 and a second camera 612 to shoot the object and produce the object image data.
Preferably, to reduce the computing load on the microprocessor 63, the microprocessor 63 of this embodiment includes a first single-chip microcomputer 631 and a second single-chip microcomputer 632. The first single-chip microcomputer 631 processes the object image data produced by the camera module 61, and the second single-chip microcomputer 632 processes the object contour information collected by the ultrasonic array 62. Further, the first single-chip microcomputer 631 also obtains the profile gray-scale background image and the three-dimensional profile gray-scale map of the object from the object image data and the object contour information, and matches them to generate the three-dimensional comprehensive image of the object.
In this embodiment, the sensor module 65 is a gravity-acceleration gyroscope sensor (MPU-6050); it can detect the current attitude of the robot and output attitude data.
Refer to Fig. 7, a structural schematic diagram of a robot provided by an embodiment of the present invention. As shown in Fig. 7, the ultrasonic array 62 is arranged on top of the steering control device 64, the camera module 61 is arranged on top of the ultrasonic array 62, and the display device module 67 is arranged on the back of the ultrasonic array 62. In this embodiment, the steering control device 64 includes a pitch steering device module 641, a left-right steering device module 642 and a front-back steering device module 643, each rotated by a stepper-motor drive. The pitch steering device module 641 receives a control instruction from the first single-chip microcomputer 631 and adjusts the pitch posture, thereby adjusting the robot's attitude so that the robot can continue with the next round of object image collection. The left-right steering device module 642 and the front-back steering device module 643 work in the same manner as the pitch steering device module 641 and are not described in detail here.
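The attitude-correction loop described above (gyro attitude in, stepper-motor steps out) can be sketched as a small conversion function. The patent does not give control parameters, so the step angle (1.8 degrees per full step, a common stepper value) and the dead band are assumptions for illustration.

```python
# Sketch of the pitch-correction step: the MPU-6050 reports a pitch
# angle, and the microprocessor converts the error into a signed number
# of stepper-motor steps for the pitch steering device module.
# deg_per_step and dead_band are assumed values, not from the patent.

def pitch_correction_steps(measured_deg, target_deg=0.0,
                           deg_per_step=1.8, dead_band=0.9):
    """Signed step count that moves the pitch toward the target."""
    error = target_deg - measured_deg
    if abs(error) < dead_band:        # close enough: do not move
        return 0
    return round(error / deg_per_step)

steps_up = pitch_correction_steps(-9.0)   # tilted 9 deg down -> step up
steps_hold = pitch_correction_steps(0.5)  # within dead band -> hold
```

The dead band prevents the motor from chattering around the target attitude; the left-right and front-back modules would use the same conversion on their own axes.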
In this embodiment, the wireless communication module 66 is a 2.4 GHz communication module that can upload data to an intelligent terminal and/or a server. Intelligent terminals here include smartphones, intelligent portable handheld devices, desktop computers, tablet computers and other terminal devices. The server here may be a home server or a cloud server.
The robot of this embodiment can accurately detect the picture and the distance of an object's movement, thereby improving the recognition and detection rate of objects on a production line at night and enabling the robot to avoid obstacles flexibly according to the high-precision three-dimensional comprehensive image of the object.
Those skilled in the art should further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are executed in hardware or in software depends on the specific application of the technical solution and on design constraints. Skilled persons may use different methods to realize the described functions for each specific application, but such realization should not be considered beyond the scope of the present invention. The computer software may be stored in a computer-readable storage medium; when executed, the program may include the flows of the embodiments of the methods above. The storage medium may be a magnetic disk, an optical disc, a read-only memory, a random-access memory, or the like.
In the embodiments above, the technical features of the embodiments of the present invention may be combined with one another as long as they do not conflict.
The foregoing is only the preferred embodiments of the present invention and is not intended to limit the present invention; any modification, equivalent substitution or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (5)
1. An object imaging method, characterised by comprising the following steps:
obtaining a profile gray-scale background image and a three-dimensional profile gray-scale map of an object; and
matching the profile gray-scale background image and the three-dimensional profile gray-scale map of the object to generate a three-dimensional comprehensive image of the object.
2. The method according to claim 1, characterised in that obtaining the profile gray-scale background image of the object specifically includes:
obtaining object image data collected by a camera module;
storing the object image data as gray-scale image data;
extracting an edge contour of the object from the gray-scale image data; and
generating the profile gray-scale background image of the object.
3. The method according to claim 1 or 2, characterised in that obtaining the three-dimensional profile gray-scale map of the object specifically includes:
obtaining a dot-matrix map of the object;
applying to the dot-matrix map the iso-surface rendering used in medical-image three-dimensional rendering; and
generating the three-dimensional profile gray-scale map of the object.
4. The method according to claim 3, characterised in that obtaining the dot-matrix map of the object specifically includes:
obtaining object contour information collected by an ultrasonic array, wherein the ultrasonic array includes N ultrasonic modules with N = N_X * N_Y, N_X being the number of ultrasonic modules per row, N_Y the number of ultrasonic modules per column, and N, N_X and N_Y being positive integers; the N_X ultrasonic modules in each row form a collecting window, each collecting window having an equal height in the column direction and an equal length in the row direction; in the column direction, the N_X ultrasonic modules of each collecting window have first fixed height values rising in row order, with equal spacing between successive first fixed height values;
obtaining a maximum range value between each ultrasonic module and the object it detects;
associating and storing the first fixed height values with the maximum range values to obtain coordinate values;
reading the coordinate values and plotting them, via a screen draw-point function, at the corresponding coordinates of a display screen's coordinate system; and
connecting the coordinate points on the display screen via a screen draw-line function to generate the dot-matrix map of the object.
5. The method according to claim 1, characterised in that matching the profile gray-scale background image and the three-dimensional profile gray-scale map of the object specifically includes:
applying normalized gray-scale matching to the profile gray-scale background image and the three-dimensional profile gray-scale map of the object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610298284.6A CN106408633A (en) | 2016-05-09 | 2016-05-09 | Object imaging method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106408633A true CN106408633A (en) | 2017-02-15 |
Family
ID=58005496
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610298284.6A Pending CN106408633A (en) | 2016-05-09 | 2016-05-09 | Object imaging method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106408633A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104408616A (en) * | 2014-11-25 | 2015-03-11 | 苏州福丰科技有限公司 | Supermarket prepayment method based on three-dimensional face recognition |
JP2016025420A (en) * | 2014-07-17 | 2016-02-08 | キヤノン株式会社 | Image processing system, image processing method, and program |
Non-Patent Citations (1)
Title |
---|
裴明涛等: "基于特征轮廓的灰度图像定位三维物体方法" (Pei Mingtao et al., "A method for locating three-dimensional objects in gray-scale images based on characteristic contours"), 《计算机研究与发展》 (Journal of Computer Research and Development) *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110999267A (en) * | 2017-08-07 | 2020-04-10 | 苹果公司 | Portable electronic device |
US11662772B2 (en) | 2017-08-07 | 2023-05-30 | Apple Inc. | Portable electronic device |
CN111524151A (en) * | 2020-04-29 | 2020-08-11 | 深圳市铭特科技有限公司 | Object detection method and system based on background recognition |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN206021358U (en) | A kind of image objects device and its robot | |
CN108564616B (en) | Fast robust RGB-D indoor three-dimensional scene reconstruction method | |
CN110264567B (en) | Real-time three-dimensional modeling method based on mark points | |
CN112150575B (en) | Scene data acquisition method, model training method and device and computer equipment | |
Huitl et al. | TUMindoor: An extensive image and point cloud dataset for visual indoor localization and mapping | |
JP6560480B2 (en) | Image processing system, image processing method, and program | |
CN108154550B (en) | RGBD camera-based real-time three-dimensional face reconstruction method | |
CN104463108B (en) | A kind of monocular real time target recognitio and pose measuring method | |
CN103886107B (en) | Robot localization and map structuring system based on ceiling image information | |
JP4473754B2 (en) | Virtual fitting device | |
JP2003514298A (en) | How to capture motion capture data | |
US20110292036A1 (en) | Depth sensor with application interface | |
US20230419438A1 (en) | Extraction of standardized images from a single-view or multi-view capture | |
CN104537705B (en) | Mobile platform three dimensional biological molecular display system and method based on augmented reality | |
WO2022062238A1 (en) | Football detection method and apparatus, and computer-readable storage medium and robot | |
CN109255749A (en) | From the map structuring optimization in non-autonomous platform of advocating peace | |
JP4395689B2 (en) | Image data processing method and modeling apparatus | |
CN104794737A (en) | Depth-information-aided particle filter tracking method | |
CN111462503A (en) | Vehicle speed measuring method and device and computer readable storage medium | |
CN112861808B (en) | Dynamic gesture recognition method, device, computer equipment and readable storage medium | |
CN114022560A (en) | Calibration method and related device and equipment | |
CN110544278B (en) | Rigid body motion capture method and device and AGV pose capture system | |
CN114298946B (en) | Deep learning point cloud completion method for enhancing frame details | |
CN106408633A (en) | Object imaging method | |
CN110021035B (en) | Marker of Kinect depth camera and virtual marker tracking method based on marker |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20170215 |