CN110070606B - Space rendering method, target detection method, detection device, and storage medium - Google Patents

Space rendering method, target detection method, detection device, and storage medium

Info

Publication number
CN110070606B
Authority
CN
China
Prior art keywords
space
effective
plane
spatial
line segment
Prior art date
Legal status
Active
Application number
CN201910258814.8A
Other languages
Chinese (zh)
Other versions
CN110070606A (en)
Inventor
魏乃科
冯复标
吴良健
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN201910258814.8A priority Critical patent/CN110070606B/en
Publication of CN110070606A publication Critical patent/CN110070606A/en
Application granted granted Critical
Publication of CN110070606B publication Critical patent/CN110070606B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The present application provides a spatial rendering method, a target detection method, a detection device, and a storage medium. The spatial rendering method includes the following steps: acquiring image information and the spatial depth information corresponding to the image information; setting a preset plane according to the image information and the spatial depth information, and forming a first effective rule space from the preset plane according to a preset rule; and drawing at least one directed line segment in the first effective rule space according to the spatial depth information, then generating a second effective rule space from the at least one directed line segment and the first effective rule space. The spatial rendering method can render a three-dimensional effective space, so that the influence of pixel points outside the effective space can be removed during spatial detection.

Description

Space rendering method, target detection method, detection device, and storage medium
Technical Field
The present application relates to the field of intelligent video analysis, and in particular to a spatial rendering method, a target detection method, a detection device, and a storage medium.
Background
The rule-drawing modes currently common in the video surveillance field are two-dimensional, typically a closed polygon or a polyline. These rule-drawing modes apply to a two-dimensional plane but cannot be applied to three-dimensional space.
Moreover, both conventional modes for drawing a regular area on a two-dimensional plane, polygon and polyline, consist of intersecting line segments. Such line segments are valid for delimiting pixel regions but not for delimiting spatial regions. The video surveillance field therefore needs a new way of generating spatial rules.
Disclosure of Invention
The present application provides a spatial rendering method, a target detection method, a detection device, and a storage medium, and mainly addresses the technical problem of how to render a three-dimensional effective space.
In order to solve the above technical problem, the present application provides a spatial rendering method, which includes:
acquiring image information and acquiring spatial depth information corresponding to the image information;
setting a preset plane according to the image information and the spatial depth information, and forming a first effective rule space by the preset plane according to a preset rule;
and drawing at least one directed line segment in the first effective rule space according to the spatial depth information, and generating a second effective rule space from the at least one directed line segment and the first effective rule space.
In order to solve the above technical problem, the present application further provides a target detection method, where the target detection method includes:
acquiring image information and an effective rule space corresponding to the image information, wherein the effective rule space is the second effective rule space of the spatial rendering method described above;
acquiring at least one segmentation plane of the effective rule space, acquiring a pixel point of the target in the image information, and acquiring three-dimensional information of the pixel point according to the spatial depth information corresponding to the image information;
setting a ray with the pixel point as its start point, and counting the number of segmentation planes the ray passes through;
if the number is an odd number, determining that the target is in the effective rule space; if the number is an even number, determining that the target is outside the effective rule space;
wherein at least one of the segmentation planes is perpendicular to a preset plane of the effective rule space.
In order to solve the above technical problem, the present application further provides a detection apparatus, which includes a memory and a processor, wherein the memory is coupled to the processor;
wherein the memory is configured to store program data, and the processor is configured to execute the program data to implement the spatial rendering method and/or the target detection method described above.
To solve the above technical problem, the present application further provides a computer storage medium for storing program data which, when executed by a processor, implements the spatial rendering method and/or the target detection method described above.
Compared with the prior art, the beneficial effects of the present application are as follows. The detection device acquires image information and the spatial depth information corresponding to it; establishes a three-dimensional coordinate system from the image information and the spatial depth information, and obtains the three-dimensional coordinates of all pixel points of the image in that coordinate system; sets a preset plane in the coordinate system and, based on it, a first effective rule space according to a preset rule; and draws at least one directed line segment in the first effective rule space, taking the space enclosed by the planes through the directed line segments and the preset plane as a second effective rule space. A three-dimensional effective space is thus rendered, points outside it can be treated as interference points, and their influence is effectively removed during spatial detection.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort. Wherein:
FIG. 1 is a schematic illustration of a prior-art video surveillance image;
FIG. 2 is a schematic flow chart of a first embodiment of the spatial rendering method of the present application;
FIG. 3 is a schematic illustration of a surveillance image acquired in an embodiment of the spatial rendering method of FIG. 2;
FIG. 4 is a schematic diagram of one embodiment of directed line segments in the embodiment of the spatial rendering method of FIG. 2;
FIG. 5 is a schematic view of the directed line segment of FIG. 4 in a camera coordinate system;
FIG. 6 is a schematic diagram of another embodiment of directed line segments in the embodiment of the spatial rendering method of FIG. 2;
FIG. 7 is a schematic diagram of the directed line segment of FIG. 6 in a camera coordinate system;
FIG. 8 is a schematic diagram of yet another embodiment of directed line segments in the embodiment of the spatial rendering method of FIG. 2;
FIG. 9 is a schematic diagram of the directed line segments of FIG. 8 in a camera coordinate system;
FIG. 10 is a schematic flow chart diagram of a second embodiment of a spatial rendering method of the present application;
FIG. 11 is a schematic diagram of one embodiment of directed line segments in the embodiment of the spatial rendering method of FIG. 10;
FIG. 12 is a schematic diagram of another embodiment of directed line segments in the embodiment of the spatial rendering method of FIG. 10;
FIG. 13 is a schematic flow chart diagram of a first embodiment of the object detection method of the present application;
FIG. 14 is a schematic projection of the effective rule space of FIG. 13 in the XY plane;
FIG. 15 is a schematic flow chart diagram of a second embodiment of the object detection method of the present application;
FIG. 16 is a schematic structural diagram of an embodiment of the detection apparatus of the present application;
FIG. 17 is a schematic structural diagram of an embodiment of a computer storage medium according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
Video behavior analysis generally forms a tracking ID for a target through target detection and target tracking, and then analyzes the video behavior from the state of that tracking ID. If glass is present in the scene, however, reflections in the glass become a problem: targets are also detected and tracked on the other side of the glass.
Specifically, as shown in fig. 1, there is only one pedestrian in the monitored picture, but because of the glass, the pedestrian's reflection appears on its other side. During video behavior analysis, the detection device detects both the person's head on one side of the glass and the reflected head on the other side, so the surveillance mistakenly registers two targets. This causes large errors in pedestrian analysis and can even make the detection device raise a false alarm that a person is approaching.
To solve this problem, the spatial rendering method of the present application introduces stereoscopic depth information. With the depth information, the spatial coordinate data of a detection target can be acquired. Further, the spatial rendering method can render valid spatial regions, i.e., spatial rules, in the image. By judging whether the target's coordinates fall within the valid spatial region, interference from outside that region can be effectively removed.
Specifically, the present embodiment provides a spatial rendering method for rendering a three-dimensional effective space. The method can be used to rapidly render an effective three-dimensional space and thereby reduce the influence of interference points outside it.
Referring to fig. 2 in particular, fig. 2 is a schematic flow chart diagram of a first embodiment of a space rendering method of the present application.
As shown in fig. 2, the space rendering method of the present embodiment specifically includes the following steps:
s101: and acquiring image information and acquiring spatial depth information corresponding to the image information.
The detection device can establish a communication connection with an external camera, which may be installed at sites requiring video surveillance, such as banks, hospitals, schools, and other public gathering places. The camera collects and records information of the monitored site in real time, and the collected information is sent to the detection device as a monitoring image or composed into one.
In addition, the detection device can be connected to an external storage device, such as a removable hard disk, a floppy disk drive, a USB flash drive, or an optical disk drive; if the monitoring image is stored on the external storage device, the detection device can acquire it directly from there. Referring to fig. 3, fig. 3 shows a monitoring image acquired in the embodiment of the spatial rendering method of fig. 2.
Further, the detection device acquires the spatial depth information corresponding to the acquired monitoring image. Specifically, the detection device may obtain the spatial depth information by technical means such as binocular stereo contour modeling, structured-light three-dimensional modeling, or time-of-flight three-dimensional modeling.
The detection device establishes a camera coordinate system with X, Y, and Z axes from the image information and the corresponding spatial depth information. For example, the height h of the camera above the ground is obtained from the spatial depth information, h being the maximum difference among the spatial depth values corresponding to the image information. In this camera coordinate system, the camera is located at (0, 0, h), and the point at distance h from the camera along the Z-axis direction is taken as the coordinate origin (0, 0, 0). The detection device then converts every pixel point of the image information into a three-dimensional coordinate point in the camera coordinate system according to the spatial depth information.
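To make this conversion concrete, here is a minimal sketch that back-projects a depth map into camera coordinates. It assumes a pinhole camera model with known intrinsics fx, fy, cx, and cy, which is an assumption of this sketch; the patent states only that the depth comes from stereo, structured-light, or time-of-flight modeling.

    import numpy as np

    def pixels_to_camera_points(depth, fx, fy, cx, cy):
        """Back-project a depth map to 3D points in the camera coordinate system.

        depth: (rows, cols) array of per-pixel depth along the optical axis.
        fx, fy, cx, cy: pinhole intrinsics (an assumption of this sketch).
        Returns a (rows, cols, 3) array of (X, Y, Z) camera coordinates."""
        rows, cols = depth.shape
        u, v = np.meshgrid(np.arange(cols), np.arange(rows))
        x = (u - cx) * depth / fx    # lateral offset grows with depth
        y = (v - cy) * depth / fy
        return np.stack([x, y, depth], axis=-1)

A ground-aligned coordinate system such as the one described above (origin at distance h from the camera, Z pointing up) can then be obtained from these points by a fixed rotation and translation.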
S102: setting a preset plane according to the image information and the spatial depth information, and forming a first effective rule space from the preset plane according to a preset rule.
After the camera coordinate system is set up, the detection device designates a plane in it as the preset plane. For example, in the above camera coordinate system, the XY plane, i.e., the plane Z = 0, is set as the preset plane, where the XY plane is the ground in the monitoring image of fig. 3.
The detection device sets a corresponding preset normal pointing upward, perpendicular to the preset plane, and takes the space into which the preset plane extends along that normal as the first effective rule space. The first effective rule space provides the base space from which the second effective rule space is subsequently formed.
For example, in the camera coordinate system, since the Z axis is perpendicular to the XY plane, the detection device may take the ray along the Z axis as the normal of the XY plane, and take the space into which the XY plane extends along the Z-axis direction, i.e., the half-space Z > 0, as the first effective rule space.
In other embodiments, the detection device may instead use any other ray that is perpendicular to the preset plane and conforms to the right-hand spiral rule as the preset normal, which is not repeated here.
S103: drawing at least one directed line segment in the first effective rule space according to the spatial depth information, and generating a second effective rule space from the at least one directed line segment and the first effective rule space.
After the first effective rule space is established, the detection device draws at least one directed line segment in it. As shown in figs. 4 and 5, both the start point and the end point of the directed line segment 11 drawn in fig. 4 fall on the wall surface of the monitoring image; the position of directed line segment 11 in the camera coordinate system is shown in fig. 5. From the directed line segment 11 in fig. 5, a segmentation plane 12 can be generated, where the generated segmentation plane must satisfy the following two conditions:
1. The directed line segment lies within the segmentation plane generated from it.
2. Each segmentation plane intersects the preset plane.
Further, the detection device may generate a segmentation plane that intersects the preset plane perpendicularly.
Since both end points of directed line segment 11 fall on the wall surface, and the wall is known to be perpendicular to the ground, the detection device takes the plane of the wall as segmentation plane 12. The effective rule space formed by segmentation plane 12 is determined by the relationship between directed line segment 11 and the normal direction of segmentation plane 12: the normal direction is determined from the direction of directed line segment 11 by the right-hand spiral rule, and the space into which segmentation plane 12 extends along that normal is taken as the effective rule space.
In this way, if the left wall surface in the monitoring image of fig. 4 is the YZ plane of the camera coordinate system in fig. 5, the effective rule space formed by segmentation plane 12 is X > 0. Intersecting it with the first effective rule space Z > 0 yields the second effective rule space (X > 0, Z > 0).
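The construction of segmentation half-spaces described above can be sketched as follows, taking the preset plane as Z = 0 as in the example; the particular sign convention used here for the right-hand spiral rule is an assumption of this sketch.

    import numpy as np

    UP = np.array([0.0, 0.0, 1.0])  # preset normal: Z axis, perpendicular to the ground plane Z = 0

    def division_halfspace(start, end):
        """Return a predicate testing the valid side of one segmentation plane.

        The plane contains the directed segment and is perpendicular to the
        preset plane; its normal is derived from the segment direction by a
        right-hand convention (the sign chosen here is an assumption)."""
        a = np.asarray(start, float)
        d = np.asarray(end, float) - a
        d[2] = 0.0                   # keep the plane vertical (assumes a non-vertical segment)
        n = np.cross(UP, d)
        n /= np.linalg.norm(n)
        return lambda p: float(np.dot(np.asarray(p, float) - a, n)) > 0.0

    def in_second_rule_space(point, segments):
        """Intersect the first rule space (Z > 0) with every segmentation half-space."""
        tests = [division_halfspace(s, e) for s, e in segments]
        return point[2] > 0.0 and all(t(point) for t in tests)

    # Example mirroring figs. 4-5: a segment in the YZ wall whose half-space is X > 0
    seg = (np.array([0.0, 2.0, 1.0]), np.array([0.0, 0.0, 1.0]))   # runs toward -Y
    print(in_second_rule_space(np.array([1.0, 1.0, 0.5]), [seg]))  # True: X > 0 and Z > 0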
Further, the start point and/or end point of a directed line segment may be arbitrary: as shown in fig. 6, the start point of directed line segment 21 falls on the wall surface of the monitoring image while its end point falls in a non-wall region. From directed line segment 21 of fig. 6, the detection device generates the plane marked by the dashed frame in the camera coordinate system of fig. 7 as segmentation plane 22.
Further, if the detection device draws several directed line segments in the first effective rule space, each directed line segment generates its own segmentation plane. In the monitoring image shown in fig. 8, the detection device draws four mutually disconnected directed line segments 31. Referring to fig. 9 for the position of each directed line segment 31 in the camera coordinate system, each directed line segment 31 generates a segmentation plane 32 perpendicular to the preset plane 33, and the preset plane 33 together with the four segmentation planes 32 encloses the second effective rule space 34.
Similarly, whenever the detection device draws several directed line segments in the monitoring image, the corresponding segmentation planes and the preset plane can enclose a spatial polyhedron as the second effective rule space, which is not repeated here.
In this embodiment, the detection device acquires image information and the spatial depth information corresponding to it; establishes a three-dimensional coordinate system from them and obtains the three-dimensional coordinates of all pixel points of the image in that coordinate system; sets a preset plane in the coordinate system and, based on it, a first effective rule space according to a preset rule; and draws at least one directed line segment in the first effective rule space, taking the space enclosed by the planes through the directed line segments and the preset plane as a second effective rule space. A three-dimensional effective space is thus rendered, points outside it can be treated as interference points, and their influence is effectively removed during spatial detection. The effective regular spatial area can solve problems that a two-dimensional pixel area cannot solve because of the perspective effect, such as the mirror reflection described above.
For step S103 in the embodiment shown in fig. 2, the present application further proposes another specific method. Referring to fig. 10, fig. 10 is a schematic flow chart diagram illustrating a second embodiment of the space rendering method of the present application.
Specifically, in step S103 the detection device may generate a directed line segment randomly in the monitoring image, or draw one in the first effective rule space according to user input information. The detection device must also convert a two-dimensional directed line segment in the monitoring image into a three-dimensional directed line segment in the camera coordinate system before it can generate a valid segmentation plane. To improve the accuracy of segmentation-plane generation, the spatial rendering method of this embodiment proposes the following steps:
s201: and drawing at least one directed line segment on the image, and acquiring two-dimensional information and spatial depth information of a starting point and an end point.
After the first effective rule space is established, the detection device may generate a directed line segment in it randomly, or draw one according to user input information.
Specifically, the user input information may include the coordinate information of a start point and/or an end point. If it includes both, the detection device directly generates the directed line segment on the monitoring image. If it includes only the start point or only the end point, the detection device randomly generates the other according to the known coordinate information and a preset rule. The preset rule may include: the direction of the directed line segment from start point to end point is clockwise in the camera coordinate system.
After acquiring the two-dimensional information of the directed line segment in the monitoring image, the detection device acquires the spatial depth information of the start point and the end point by the technical means of step S101, which is not repeated here.
Further, the preset rule may also include: when several directed line segments are drawn, they are kept separate from one another. The reason is that continuous polylines lack generality. In the monitoring image of fig. 11, the detection device cannot accurately delimit the ground area through the intersecting continuous directed polyline 41, because the intersection points of the segments are occluded by other targets in the image, such as the human body in fig. 11. The detection device then cannot compute the spatial coordinates of the desired corner points, so an accurate ground-space region cannot be formed.
To solve this problem, the detection device can draw separated directed line segments. For example, in the monitoring image shown in fig. 12, the two directed line segments 51 do not intersect; the segmentation planes formed from the separated segments are determined independently, and the intersections of their extensions are not affected by the occluding target.
S202: calculating multiple spatial depth values of the pixel points within a preset range of the start point and the end point, and acquiring the average or median of these values.
The directed line segment is drawn on the two-dimensional monitoring image, so its start point and/or end point must be converted into a start point and/or end point in three-dimensional space. This conversion requires the spatial depth information (point cloud data), and a point-to-point conversion produces large errors because of the error inherent in the three-dimensional depth information.
Therefore, to improve the accuracy of drawing the directed line segment, the detection device calculates multiple spatial depth values of the pixel points within a preset range of the start point and the end point.
Specifically, the detection device may analyze the spatial depth information within a preset range around the spatial point corresponding to the start and/or end pixel point, and take the average value over that area as the coordinate value of the spatial point.
Alternatively, the detection device may analyze the spatial depth information within the preset range and take the median value over that region as the coordinate value of the spatial point.
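A minimal sketch of this neighbourhood statistic follows; the square window and its radius are assumed values, since the patent specifies only "a preset range".

    import numpy as np

    def robust_point_depth(depth, u, v, radius=3, use_median=True):
        """Estimate an endpoint's depth from its pixel neighbourhood (S202).

        depth: (rows, cols) depth map; (u, v): endpoint pixel column and row.
        radius: half-width of the square window, an assumed value for the
        patent's unspecified 'preset range'."""
        rows, cols = depth.shape
        win = depth[max(v - radius, 0):min(v + radius + 1, rows),
                    max(u - radius, 0):min(u + radius + 1, cols)]
        valid = win[win > 0]         # skip holes/invalid pixels in the depth map
        if valid.size == 0:
            return None              # no usable depth near this endpoint
        return float(np.median(valid) if use_median else np.mean(valid))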
S203: determining the three-dimensional information of the start point and the end point from the average or median together with the two-dimensional information, and forming a directed line segment in the first effective rule space from that three-dimensional information.
After the spatial coordinates of the start and/or end points are obtained, the detection device generates a valid line segment in the first effective rule space from them.
Further, the detection device may also judge whether the obtained spatial point of the start and/or end point lies within the first effective rule space; if not, it abandons that spatial point and acquires the spatial coordinates of another start and/or end point.
In this embodiment, the spatial rendering method uses separated directed line segments instead of a continuous intersecting polyline, which avoids the situation where important corner points are occluded and cannot be placed in three-dimensional space. The method also uses the average or median to convert the two-dimensional information of the start and/or end points in the monitoring image into three-dimensional information in the camera coordinate system, which improves the accuracy of drawing the directed line segments and of the subsequently generated segmentation planes.
In the spatial rendering method of the above embodiments, the detection device generates the second effective rule space by generating a spatial-region rule from separated directed line segments under preset conditions. The detection device can further perform target detection and target tracking within the generated second effective rule space. To detect a target in the second effective rule space, the present application proposes another specific method; refer to fig. 13, a schematic flow chart of the first embodiment of the target detection method of the present application.
As shown in fig. 13, the target detection method of the present embodiment specifically includes the following steps:
s301: and acquiring the image information and an effective rule space corresponding to the image information.
The image in step S301 is the monitoring image of the above embodiments, and the effective rule space is the second effective rule space of the above embodiments, which are not repeated here.
S302: acquiring at least one segmentation plane of the effective rule space, acquiring a pixel point of the target in the image information, and acquiring the three-dimensional information of the pixel point according to the spatial depth information corresponding to the image.
The detection device determines the target to be detected in the monitoring image and acquires a pixel point of the target. It further obtains the spatial depth information of this pixel point and converts its two-dimensional information into three-dimensional information in the camera coordinate system. For the methods of obtaining spatial depth information and converting two-dimensional information into three-dimensional information, refer to steps S101 and S202 above, which are not repeated here.
S303: setting a ray with the pixel point as its start point, counting the number of segmentation planes the ray passes through, and judging whether that number is odd.
The detection device sets an arbitrary ray whose start point is the three-dimensional coordinate of the pixel point, and judges whether the ray passes through any segmentation plane along its extension. If not, the target is judged to be outside the effective rule space. If so, the device counts the number of segmentation planes the ray passes through and judges whether that number is odd. If odd, proceed to step S304; if not, proceed to step S305.
S304: judging that the target is within the effective rule space.
S305: judging that the target is outside the effective rule space.
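A sketch of the crossing-count test is given below. It assumes each segmentation plane is the finite vertical rectangle spanned by its directed segment between the preset plane Z = 0 and a closing height z_top (closing planes are introduced in the second embodiment below), and it fixes an arbitrary ray direction to avoid grazing cases; both choices are assumptions of this sketch.

    import numpy as np

    UP = np.array([0.0, 0.0, 1.0])  # normal of the preset (ground) plane

    def ray_crosses_vertical_plane(origin, direction, a, b, z_top):
        """Test whether a ray crosses the finite vertical segmentation plane
        spanned by the directed segment (a, b) between z = 0 and z = z_top."""
        a = np.asarray(a, float)
        d = np.asarray(b, float) - a
        d[2] = 0.0                               # the plane is vertical
        n = np.cross(UP, d)                      # plane normal
        denom = float(np.dot(direction, n))
        if abs(denom) < 1e-9:
            return False                         # ray parallel to the plane
        t = float(np.dot(a - origin, n)) / denom
        if t <= 0.0:
            return False                         # plane lies behind the ray origin
        p = origin + t * direction               # intersection with the infinite plane
        s = float(np.dot(p[:2] - a[:2], d[:2])) / float(np.dot(d[:2], d[:2]))
        return 0.0 <= s <= 1.0 and 0.0 <= p[2] <= z_top

    def target_in_rule_space(point, segments, z_top):
        """S303-S305: an odd number of crossings means the target is inside."""
        origin = np.asarray(point, float)
        direction = np.array([1.0, 0.37, 0.21])  # arbitrary direction, avoids grazing edges
        crossings = sum(ray_crosses_vertical_plane(origin, direction, a, b, z_top)
                        for a, b in segments)
        return crossings % 2 == 1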
Further, since the position of the effective rule space in the camera coordinate system is already determined, that is, the coordinate range satisfying the effective rule space is known, the target detection method of this embodiment may alternatively judge whether the three-dimensional coordinates of the target fall within that coordinate range, and if so, judge that the target is within the effective rule space.
Further, when all segmentation planes of the effective rule space are perpendicular to the preset plane, both the effective rule space and the target can be projected onto the XY plane. The target detection method thus converts a three-dimensional problem into a two-dimensional one: the question of whether a target point lies inside a spatial polyhedron becomes the question of whether a planar point lies inside a polygonal area.
Specifically, referring to fig. 14, regular area 62 is the projection of the effective rule space onto the XY plane, i.e., the plane Z = 0, and point 61 is the projection of the target point onto the XY plane. The target detection method directly judges whether point 61 lies in regular area 62; if so, the target also lies within the effective rule space.
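One way to implement the projected test of fig. 14 is the standard even-odd (ray-casting) point-in-polygon check sketched below; the patent does not prescribe a specific polygon test, so this is an illustrative choice.

    def point_in_polygon(px, py, polygon):
        """Even-odd test: cast a horizontal ray from (px, py) toward +X and
        count edge crossings; odd means the projected target is inside.

        polygon: list of (x, y) vertices of the rule space projected onto Z = 0."""
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > py) != (y2 > py):                            # edge straddles the ray
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)  # crossing abscissa
                if x_cross > px:
                    inside = not inside
        return inside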
In this embodiment, the target detection method uses the second effective rule space of the above spatial rendering embodiments to judge whether the target is within the effective rule space. Specifically, it extends a ray from the three-dimensional coordinates of the target and quickly judges whether the target is inside from the number of segmentation planes the ray passes through.
With respect to step S301 in the embodiment shown in fig. 13, the present application further proposes another specific method. Referring to fig. 15, fig. 15 is a flowchart illustrating a second embodiment of the object detection method of the present application.
Specifically, in some cases the segmentation planes and the preset plane of step S301 cannot enclose a closed space. As shown in fig. 15, the detection device draws three directed line segments in the monitoring image. The effective rule space formed by their segmentation planes and the preset plane is then not closed: on one hand, its extent in the Z-axis direction is infinite; on the other hand, the rule space formed by the three directed line segments is not closed in the XY plane. An unclosed effective rule space leads to target-detection errors and wrong detection results. Therefore, the target detection method of this embodiment closes the unclosed effective rule space, with the following specific steps:
s401: and judging whether the number of the division planes is more than or equal to four.
The detection device judges whether the number of segmentation planes is greater than or equal to four, in order to judge whether the effective rule space (in particular an irregular polyhedral space) is closed. If the effective rule space is not closed, step S402 is performed to close it, so as to improve the accuracy of target detection.
S402: adding directed line segments and the corresponding segmentation planes so that the effective rule space forms a closed space.
To close the effective rule space, the detection device adds directed line segments and corresponding segmentation planes to form the space boundary of the effective rule space. For example, for an effective rule space whose extent in the Z-axis direction is infinite, the detection device may add a segmentation plane at a preset value of Z: when the space boundary of the effective rule space is (-1000 ≤ X ≤ 1000, -1000 ≤ Y ≤ 1000), the effective rule space is an open space; after the segmentation plane Z = 1000 is added, the space boundary of the new effective rule space is (-1000 ≤ X ≤ 1000, -1000 ≤ Y ≤ 1000, -1000 ≤ Z ≤ 1000), and the new effective rule space is a closed space. Using a closed effective rule space for target detection improves detection accuracy.
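The closing step can be illustrated with a small sketch. The bounds dictionary and the helper name close_rule_space are illustrative assumptions; the patent describes the operation only as adding a segmentation plane at a preset value of Z.

    def close_rule_space(bounds, z_cap=1000.0):
        """Close a rule space that is open along +Z by adding a capping plane (S402).

        bounds: dict mapping axis name to a (lo, hi) pair, hi = None when open.
        z_cap = 1000 mirrors the example above and is otherwise arbitrary."""
        lo, hi = bounds['z']
        if hi is None:               # open along +Z: append the plane Z = z_cap
            bounds['z'] = (lo, z_cap)
        return bounds

    # Example: a space bounded in X and Y, bounded below by the preset plane Z = 0,
    # but open above
    space = {'x': (-1000.0, 1000.0), 'y': (-1000.0, 1000.0), 'z': (0.0, None)}
    print(close_rule_space(space))   # 'z' becomes (0.0, 1000.0): a closed space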
In this embodiment, the detection device checks whether the effective rule space is a closed space; if not, it adds directed line segments and the corresponding segmentation planes so that the effective rule space forms a closed space. Using the closed effective rule space improves the accuracy of target detection.
To implement the spatial rendering method and/or the target detection method of the above embodiments, the present application further provides a detection apparatus; refer to fig. 16, a schematic structural diagram of an embodiment of the detection apparatus of the present application.
The detection apparatus 700 comprises a memory 71 and a processor 72, wherein the memory 71 and the processor 72 are coupled.
The memory 71 is used to store program data, and the processor 72 is used to execute the program data to implement the spatial rendering method and/or the target detection method of the above embodiments.
In this embodiment, the processor 72 may also be referred to as a CPU (Central Processing Unit). The processor 72 may be an integrated circuit chip with signal processing capability. It may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor 72 may be any conventional processor, or the like.
The present application also provides a computer storage medium. As shown in fig. 17, the computer storage medium 800 stores program data which, when executed by a processor, implements the methods described in the embodiments of the spatial rendering method and/or the target detection method of the present application.
The methods involved in the embodiments of the spatial rendering method and/or the target detection method of the present application, when implemented as software functional units and sold or used as independent products, may be stored in a device such as a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing over the prior art, may be embodied in a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description presents only embodiments of the present application and does not limit its patent scope. Any equivalent structure or equivalent process transformation made using the contents of this specification and the accompanying drawings, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (11)

1. A spatial rendering method, characterized in that the spatial rendering method comprises:
acquiring image information and spatial depth information corresponding to the image information;
setting a preset plane according to the image information and the spatial depth information, and forming a first effective rule space by the preset plane according to a preset rule;
and drawing at least one directed line segment in the first effective rule space according to the spatial depth information, and generating a second effective rule space from the at least one directed line segment and the first effective rule space.
2. The spatial rendering method of claim 1, wherein the step of forming a first effective rule space from the preset plane according to a preset rule further comprises:
setting a preset normal corresponding to the preset plane according to the spatial depth information, and taking the space extending from the preset plane along the preset normal as the first effective rule space.
3. The spatial rendering method of claim 1, wherein the step of drawing at least one directed line segment in the first effective rule space according to the spatial depth information further comprises:
drawing at least one directed line segment on the image, and converting the two-dimensional information of the at least one directed line segment into three-dimensional information according to the spatial depth information, so as to draw the at least one directed line segment in the first effective rule space.
4. The spatial rendering method of claim 3, wherein the directed line segment comprises a start point and an end point;
the step of converting the two-dimensional information of the at least one directed line segment into three-dimensional information according to the spatial depth information further includes:
acquiring the two-dimensional information and spatial depth information of the start point and the end point;
calculating a plurality of spatial depth values of the pixel points within a preset range of the start point and the end point, and acquiring an average value or a median value of the plurality of spatial depth values;
and determining the three-dimensional information of the start point and the end point according to the average value or the median value and the two-dimensional information, and forming a directed line segment in the first effective rule space from the three-dimensional information of the start point and the end point.
5. The spatial rendering method of claim 1, wherein the step of generating a second effective rule space from the at least one directed line segment and the first effective rule space further comprises:
generating a segmentation plane intersecting the preset plane according to the at least one directed line segment, obtaining a segmentation normal of the segmentation plane, and taking the space extending from the segmentation plane along the segmentation normal as a segmentation space;
and taking the overlapping space of the segmentation space and the first effective rule space as the second effective rule space.
6. The spatial rendering method according to claim 1, wherein the step of obtaining spatial depth information corresponding to the image information further comprises:
and acquiring the spatial depth information corresponding to the image information through binocular stereo contour modeling, structured-light three-dimensional modeling, or time-of-flight three-dimensional modeling.
7. A target detection method, characterized in that the target detection method comprises:
acquiring image information and an effective rule space corresponding to the image information, wherein the effective rule space is the second effective rule space in the spatial rendering method of any one of claims 1 to 6;
acquiring at least one segmentation plane of the effective rule space, acquiring a pixel point of a target in the image information, and acquiring three-dimensional information of the pixel point according to the spatial depth information corresponding to the image information;
setting a ray with the pixel point as a start point, and counting the number of segmentation planes the ray passes through;
if the number is odd, determining that the target is within the effective rule space; if the number is even, determining that the target is outside the effective rule space;
wherein at least one of the segmentation planes is perpendicular to a preset plane of the effective rule space.
8. The target detection method of claim 7, wherein the step of acquiring at least one segmentation plane of the effective rule space further comprises:
judging whether the number of segmentation planes is greater than or equal to four;
and if not, adding directed line segments and corresponding segmentation planes so that the effective rule space forms a closed space.
9. The target detection method according to claim 7, characterized in that the target detection method further comprises:
if the segmentation planes are perpendicular to the preset plane, judging whether the projection of the pixel point of the target on the preset plane is within the effective rule space;
if so, judging that the target is within the effective rule space;
if not, judging that the target is outside the effective rule space.
10. A detection apparatus, comprising a memory and a processor, wherein the memory is coupled to the processor;
wherein the memory is configured to store program data, and the processor is configured to execute the program data to implement the spatial rendering method of any one of claims 1 to 6 and/or the target detection method of any one of claims 7 to 9.
11. A computer storage medium for storing program data which, when executed by a processor, implements the spatial rendering method of any one of claims 1 to 6 and/or the target detection method of any one of claims 7 to 9.
CN201910258814.8A 2019-04-01 2019-04-01 Space rendering method, target detection method, detection device, and storage medium Active CN110070606B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910258814.8A CN110070606B (en) 2019-04-01 2019-04-01 Space rendering method, target detection method, detection device, and storage medium


Publications (2)

Publication Number Publication Date
CN110070606A CN110070606A (en) 2019-07-30
CN110070606B true CN110070606B (en) 2023-01-03

Family

ID=67366800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910258814.8A Active CN110070606B (en) 2019-04-01 2019-04-01 Space rendering method, target detection method, detection device, and storage medium

Country Status (1)

Country Link
CN (1) CN110070606B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111274943B (en) * 2020-01-19 2023-06-23 深圳市商汤科技有限公司 Detection method, detection device, electronic equipment and storage medium
CN111274642B (en) * 2020-01-23 2023-11-21 久瓴(江苏)数字智能科技有限公司 Method and device for generating pottery tile sloping roof, computer equipment and storage medium
CN114742971B (en) * 2022-04-06 2023-03-21 电子科技大学 Plane detection method based on wire frame representation


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004172754A (en) * 2002-11-18 2004-06-17 Sumitomo Osaka Cement Co Ltd Monitoring apparatus and monitoring method
CN103279943A (en) * 2013-04-18 2013-09-04 深圳市中瀛鑫科技股份有限公司 Target invasion detection method and device, and video monitoring system
CN104935893A (en) * 2015-06-17 2015-09-23 浙江大华技术股份有限公司 Monitoring method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"3D Line Segment Detection Algorithm for Large-Scale Scenes" (面向大规模场景的三维线段检测算法); 陈庭旺 (Chen Tingwang) et al.; Journal of Computer-Aided Design & Computer Graphics (计算机辅助设计与图形学学报); 2011-05-15 (No. 05); full text *

Also Published As

Publication number Publication date
CN110070606A (en) 2019-07-30

Similar Documents

Publication Publication Date Title
CN110070606B (en) Space rendering method, target detection method, detection device, and storage medium
EP3016071B1 (en) Estimating device and estimation method
US9552514B2 (en) Moving object detection method and system
CN110276829B (en) Three-dimensional representation by multi-scale voxel hash processing
US8199977B2 (en) System and method for extraction of features from a 3-D point cloud
JP6471448B2 (en) Noise identification method and noise identification apparatus for parallax depth image
CN113196296A (en) Detecting objects in a crowd using geometric context
KR20180004766A (en) Method for detecting collision between collision bodies of real time virtual scene and terminal and storage medium
US8565557B2 (en) Free view generation in ray-space
WO2021052283A1 (en) Method for processing three-dimensional point cloud data and computing device
US20170337665A1 (en) Method and device for the real-time adaptive filtering of noisy depth or disparity images
CN109658497B (en) Three-dimensional model reconstruction method and device
WO2020199562A1 (en) Depth information detection method, apparatus and electronic device
CN111630342A (en) Gap detection method and system for visual welding system
CN112465911A (en) Image processing method and device
JP7247573B2 (en) 3D geometric model generation device, 3D geometric model generation method, and program
US20220358694A1 (en) Method and apparatus for generating a floor plan
CN113744416B (en) Global point cloud filtering method, equipment and storage medium based on mask
CN113807182B (en) Method, device, medium and electronic equipment for processing point cloud
CN115494856A (en) Obstacle avoidance method and device, unmanned aerial vehicle and electronic equipment
CN113096024B (en) Flying spot removing method for depth data, system and electronic equipment thereof
CN104156973A (en) Real-time three-dimensional video monitoring method based on stereo matching
Džijan et al. Towards fully synthetic training of 3D indoor object detectors: Ablation study
CN113592976A (en) Map data processing method and device, household appliance and readable storage medium
WO2020042030A1 (en) Gap detection method and system for visual welding system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant