CN107666563B - Anti-collision detection method and system applied to camera


Info

Publication number
CN107666563B
Authority
CN
China
Prior art keywords
camera
collision
current
processing device
virtual model
Prior art date
Legal status
Active
Application number
CN201710792373.0A
Other languages
Chinese (zh)
Other versions
CN107666563A (en)
Inventor
赵云
金世杰
Current Assignee
Shanghai Media Tech Co ltd
Original Assignee
Shanghai Media Tech Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Media Tech Co ltd
Priority to CN201710792373.0A
Publication of CN107666563A
Application granted
Publication of CN107666563B
Legal status: Active
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment

Abstract

The invention discloses an anti-collision detection method and system applied to cameras, belonging to the technical field of broadcasting. The method comprises the following steps: constructing a first virtual model of a presentation scene and a second virtual model of each camera, capturing the relative coordinate point of each second virtual model in the first virtual model, establishing a collision raster range for each camera, and judging whether any two collision raster ranges come into contact; if so, a collision between the corresponding cameras is imminent, and a processing device sends a stop instruction to those cameras to make them stop moving. The beneficial effects of this technical scheme are that the cameras are prevented from colliding with and interfering with each other in the presentation scene, the broadcast proceeds normally and in an orderly manner, and the safety of the cameras' working environment is ensured.

Description

Anti-collision detection method and system applied to camera
Technical Field
The invention relates to the technical field of broadcasting, in particular to an anti-collision detection method and system applied to a camera.
Background
A studio is a place where spatial art is created with light and sound, and is the conventional venue for television program production. With the continuous development of science and technology, large-scale studios have begun to appear. To satisfy audiences' demand for different viewing angles, multiple cameras installed at different positions must shoot simultaneously in the studio, while control-room personnel edit and fuse the video from these angles to form the television picture that audiences finally see.
Further, for the sake of the broadcast effect, cameras installed at different positions must be moved constantly according to the requirements of the shooting content or the desired shooting effect, for example moved horizontally or raised and lowered. Cameras that move automatically while shooting, according to a series of externally input commands, have come into use, but their movement is irregular. In scenes where several cameras move simultaneously, if their movement is not constrained they may collide with one another, which affects the normal operation of the cameras and the normal production of television programs, and may even cause a safety accident that endangers the personal safety of program production staff.
Disclosure of Invention
In view of the above problems in the prior art, an anti-collision detection method and system applied to cameras are provided, which aim to prevent cameras from colliding with and interfering with each other in a studio, to ensure that the broadcast process proceeds normally and in an orderly manner, and to ensure the safety of the cameras' working environment.
An anti-collision detection method applied to cameras is suitable for a presentation scene provided with a plurality of cameras, the cameras moving in the presentation scene according to externally input instructions. A first virtual model of the presentation scene and a second virtual model of each camera are pre-imported into a processing device, and the relative coordinate point of each second virtual model in the first virtual model is updated according to the real-time position of the corresponding camera. The method further comprises:
step S1, the processing device acquires the relative coordinate point of each camera in the first virtual model at the current moment;
step S2, the processing device forms a collision raster range of the camera at the current time with the relative coordinate point of each camera as a reference;
step S3, the processing device determines whether two collision raster ranges collide with each other in the first virtual model at the current time:
if yes, the processing device sends an instruction of stopping movement to the corresponding camera, and then returns to the step S1;
if not, the process returns to the step S1 directly.
Preferably, in the anti-collision detection method, a compensation amount for the relative coordinate point is preset for each of the cameras;
after the processing device acquires the real-time position of a camera, the relative coordinate point is obtained by processing the real-time position together with the compensation amount.
Preferably, in the anti-collision detection method, a raster radius is preset in the processing device;
in step S2, for each camera, the processing device constructs a spherical area with the relative coordinate point as the center and the raster radius as the radius, as the collision raster range of the corresponding camera at the current time.
Preferably, in the anti-collision detection method, the raster radius is 1 meter or 2 meters.
Preferably, in the anti-collision detection method, step S2 specifically includes, for each of the cameras:
step S21, the processing device continuously acquires the coordinate values of the relative coordinate point of the camera and the rotation angle of the pan-tilt head of the camera, with a preset time interval as the cycle;
step S22, the processing device processes the coordinate value of the relative coordinate point at the current moment and the coordinate value of the relative coordinate point at the previous moment to obtain the current moving speed of the camera;
the processing device processes the coordinate value of the relative coordinate point at the current moment and the coordinate value of the relative coordinate point at the previous moment to obtain the current lifting speed of the camera; and
the processing device processes the rotation angle of the pan-tilt head at the current moment and the rotation angle of the pan-tilt head at the previous moment to obtain the current rotational angular velocity of the camera;
step S23, the processing device processes the current moving speed, the current lifting speed and the current rotational angular velocity of the camera to obtain the braking distance, at each angle around the camera, required for the camera to come to a stop from its current operating state;
step S24, the processing device processes the braking distances at each angle around the camera to obtain the collision raster range of the camera at the current time.
Preferably, in the anti-collision detection method, the preset time interval is 40 milliseconds.
Preferably, in the anti-collision detection method, in the first virtual model, third virtual models are respectively set for actual objects in the presentation scene;
while executing the step S3, the processing device further executes the following detection steps:
step a1, the processing device determines whether there is a collision between the collision raster range and any one of the third virtual models at the current time:
if yes, the processing device sends an instruction of stopping movement to the corresponding camera, and then returns to the step S1;
if not, the process returns to the step S1 directly.
Preferably, the anti-collision detection method includes providing a display interface to display the first virtual model and the second virtual model in real time for a user to view;
when the processing device determines that the two collision raster ranges collide with each other at the current moment in step S3, outputting corresponding alarm information on the display interface.
Preferably, the anti-collision detection method includes providing a display interface to display the first virtual model and the second virtual model in real time for a user to view;
when the processing device determines in step a1 that a collision raster range collides with a third virtual model at the current moment, corresponding alarm information is output on the display interface.
An anti-collision detection system applied to cameras is suitable for a presentation scene provided with a plurality of cameras, the cameras moving in the presentation scene according to externally input instructions; the system comprises:
a processing device, into which a first virtual model of the presentation scene and a second virtual model of each camera are pre-imported, and which updates the relative coordinate point of each second virtual model in the first virtual model according to the real-time position of the corresponding camera;
the processing device specifically comprises:
an acquiring unit, configured to acquire the relative coordinate point of each camera in the first virtual model at the current time;
a raster forming unit, connected to the acquiring unit, which forms the collision raster range of each camera at the current moment with the camera's relative coordinate point as a reference;
a first collision judgment unit, connected to the raster forming unit, which judges whether two collision raster ranges collide with each other in the first virtual model at the current moment and outputs a first judgment result;
and the processing device sends a movement-stopping instruction to the corresponding cameras when, according to the first judgment result, two collision raster ranges collide with each other.
Preferably, in the anti-collision detection system, a compensation amount for the relative coordinate point is preset for each of the cameras;
the processing device specifically comprises:
a coordinate conversion unit, configured to acquire the real-time position of a camera and process the real-time position together with the compensation amount to obtain the relative coordinate point.
Preferably, in the anti-collision detection system, the raster forming unit specifically includes:
a first forming module, which constructs a spherical area, taking the relative coordinate point of the camera at the current moment as the center and a raster radius as the radius, as the collision raster range of the corresponding camera at the current moment.
Preferably, in the anti-collision detection system, the raster forming unit specifically includes:
a first acquisition module, which continuously acquires the coordinate values of the relative coordinate point of the camera with a preset time interval as the cycle;
a second acquisition module, which continuously acquires the rotation angle of the pan-tilt head of the camera with the preset time interval as the cycle;
a first processing module, connected to the first acquisition module, which processes the coordinate value of the relative coordinate point at the current moment and the coordinate value at the previous moment to obtain the current moving speed of the camera;
a second processing module, connected to the first acquisition module, which processes the coordinate value of the relative coordinate point at the current moment and the coordinate value at the previous moment to obtain the current lifting speed of the camera;
a third processing module, connected to the second acquisition module, which processes the rotation angle of the pan-tilt head at the current moment and at the previous moment to obtain the current rotational angular velocity of the camera;
a fourth processing module, connected to the first, second and third processing modules, which processes the current moving speed, the current lifting speed and the current rotational angular velocity of the camera to obtain the braking distance, at each angle within the 360-degree range around the camera, required for the camera to come to a stop from its current running state;
and a second forming module, connected to the fourth processing module, which processes the braking distances at each angle within the 360-degree range around the camera to obtain the collision raster range of the camera at the current moment.
Preferably, in the anti-collision detection system, the first virtual model of the presentation scene pre-imported in the processing device includes third virtual models respectively set for the actual objects in the presentation scene;
the processing device further comprises:
a second collision judgment unit, connected to the raster forming unit, which judges whether any collision raster range collides with any third virtual model at the current moment and outputs a second judgment result;
and the processing device sends a movement-stopping instruction to the corresponding camera when, according to the second judgment result, a collision raster range collides with a third virtual model.
The beneficial effects of the above technical scheme are:
1) The anti-collision detection method applied to cameras prevents the cameras from colliding with and interfering with each other in a presentation scene, ensures that the presentation proceeds normally and in an orderly manner, and ensures the safety of the cameras' working environment.
2) The anti-collision detection system applied to cameras supports the implementation of the above anti-collision detection method.
Drawings
Fig. 1 is a schematic general flow chart of an anti-collision detection method applied to a camera according to a preferred embodiment of the present invention;
Fig. 2 is a schematic diagram of the collision raster range in a preferred embodiment of the present invention;
Fig. 3 is a schematic flow chart of obtaining the collision raster range of each camera in a preferred embodiment of the present invention;
Fig. 4 is a schematic flow chart, based on fig. 1, of further determining whether a camera collides with an actual object in the scene, in a preferred embodiment of the present invention;
Fig. 5 is a schematic diagram of the general structure of an anti-collision detection system applied to cameras according to a preferred embodiment of the present invention;
Fig. 6 is a schematic structural diagram of the raster forming unit, based on fig. 5, in a preferred embodiment of the present invention;
Fig. 7 is a schematic structural diagram of the raster forming unit, based on fig. 5, in another preferred embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
In view of the above problems in the prior art, an anti-collision detection method applied to cameras is provided, which is suitable for a presentation scene provided with a plurality of cameras, the cameras moving in the presentation scene according to externally input instructions. A first virtual model of the presentation scene and a second virtual model of each camera are pre-imported into a processing device, and the relative coordinate point of each second virtual model in the first virtual model is updated according to the real-time position of the corresponding camera. As shown in fig. 1, the method comprises the following steps:
step S1, the processing device acquires relative coordinate points of each camera in the first virtual model at the current moment;
step S2, the processing device forms the collision raster range of the camera at the current time by taking the relative coordinate point of each camera as a reference;
in step S3, the processing device determines whether two collision raster ranges exist in the first virtual model at the current time and collide with each other:
if yes, the processing device sends an instruction of stopping movement to the corresponding camera, and then returns to step S1;
if not, the process returns to step S1.
Specifically, in this embodiment, the first virtual model is constructed in advance from the presentation scene, and each point in the space of the actual presentation scene corresponds to a coordinate point in the first virtual model; that is, the actual presentation scene is completely mapped into a three-dimensional virtual model. Similarly, each camera operating in the presentation scene is mapped into a three-dimensional second virtual model and placed in the first virtual model. The position of each second virtual model can therefore be located in the first virtual model by a relative coordinate, i.e. the relative position of the camera with respect to the actual presentation scene.
In this embodiment, each camera is provided with a position uploading device that uploads the camera's current position in real time, and the processing device updates the relative coordinate point of each second virtual model in the first virtual model according to the uploaded real-time position. The position each camera uploads to the processing device has already been converted into the camera's position relative to the presentation scene: after the camera acquires its actual position (for example, GPS position information), it converts that position into a position relative to the presentation scene according to the preset positional relationship between the camera and the scene, and uploads the result to the processing device as its real-time position at the current moment. Such coordinate processing within the camera is a technique generally adopted by existing movable studio cameras and is not described again here.
In this embodiment, after acquiring the relative coordinate point of each camera at its current position, the processing device generates the collision raster range of each camera from that point. Specifically, the collision raster range may be a region surrounding the relative coordinate point, which serves as the collision buffer area of the corresponding camera, as shown in fig. 2. When the collision raster ranges of two cameras touch or overlap, the two cameras are likely to collide soon, so the processing device sends a movement-stopping command to both cameras, which stop moving upon receiving it, thereby avoiding the possible collision.
In this embodiment, the processing device continuously obtains the real-time position reported by each camera, continuously updates the relative coordinate point of the corresponding second virtual model in the first virtual model, and updates each camera's collision raster range accordingly. It can thus detect a collision before it occurs and command the cameras to stop moving in time, ensuring that the presentation proceeds normally and in an orderly manner.
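By way of illustration, steps S1 to S3 amount to the following polling loop. This is a minimal sketch, not the patented implementation; get_relative_position, form_raster_range, ranges_in_contact and send_stop are hypothetical interfaces standing in for the position-upload channel, the raster construction of step S2, the contact test of step S3 and the stop command sent to a camera.

    import time

    POLL_INTERVAL = 0.04  # 40 ms sampling period, matching the interval described later

    def detection_loop(cameras, get_relative_position, form_raster_range,
                       ranges_in_contact, send_stop):
        while True:
            # S1: relative coordinate point of every camera at the current moment
            points = {cam: get_relative_position(cam) for cam in cameras}
            # S2: collision raster range of every camera, built around its point
            ranges = {cam: form_raster_range(cam, points[cam]) for cam in cameras}
            # S3: pairwise contact test; on contact, stop both cameras involved
            ids = list(cameras)
            for i in range(len(ids)):
                for j in range(i + 1, len(ids)):
                    if ranges_in_contact(ranges[ids[i]], ranges[ids[j]]):
                        send_stop(ids[i])
                        send_stop(ids[j])
            time.sleep(POLL_INTERVAL)  # then return to S1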
In a preferred embodiment of the present invention, a compensation amount of a relative coordinate point is preset for each camera;
and after the processing device acquires the real-time position of the camera, the relative coordinate point is obtained by processing the real-time position together with the compensation amount.
Specifically, the cameras operating in a presentation scene may belong to different camera control systems whose coordinate systems differ, in particular in the position of their origins, so after a camera reports its real-time position the processing device must apply a compensation. The first virtual model has its own origin; the processing device obtains in advance the origin of the coordinate system of each camera control system in use, and pre-computes the offset (Δx, Δy, Δz) between that origin and the origin of the first virtual model. This offset is the compensation amount for every camera under that control system.
The processing device may perform coordinate compensation of the camera's real-time position according to the following formula:
(CAMxX, CAMxY, CAMxZ) = (x + Δx, y + Δy, z + Δz)
where (CAMxX, CAMxY, CAMxZ) is the relative coordinate point of camera CAMx obtained by the processing device through coordinate compensation, (x, y, z) is the real-time position of camera CAMx, and (Δx, Δy, Δz) is the compensation amount.
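By way of illustration, the compensation is a simple vector addition, sketched below; the per-control-system offset is assumed to have been measured in advance as described above.

    def compensate(real_time_pos, offset):
        # Add the control-system offset (Δx, Δy, Δz) to the reported real-time
        # position, yielding the relative coordinate point in the first virtual model.
        x, y, z = real_time_pos
        dx, dy, dz = offset
        return (x + dx, y + dy, z + dz)

    # Example: a camera reporting (3.0, 1.5, 2.0) under a control system whose
    # origin is offset by (0.5, -0.2, 0.0) from the first virtual model's origin
    # has the relative coordinate point (3.5, 1.3, 2.0).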
In a preferred embodiment of the present invention, as shown in fig. 2, a raster radius is preset in the processing device;
then, in step S2, for each camera the processing device constructs a spherical area with the relative coordinate point as the center and the raster radius as the radius, as the collision raster range of the corresponding camera at the current time.
In this method, after the current relative coordinate point of a camera is obtained, a spherical area centered on that point with the preset raster radius as its radius is formed as the camera's current collision raster range. In other words, during the movement of each camera a spherical area 21 is constructed around the camera as its collision raster range, i.e. as its collision buffer area. When the collision raster range of another camera touches this range, it can be judged that the two cameras are about to collide, and the processing device sends a stop instruction to both of them to prevent the collision in advance. The spherical region 21 is shown schematically in a cut-away view in fig. 2.
Further, in the preferred embodiment of the present invention, two types of mobile camera are used in the presentation scene: in one, the camera and its pan-tilt head sit on a fixed base with pulleys at the bottom and can move in any direction; in the other, the camera and its pan-tilt head are mounted on a fixed track and can move only along the direction in which the track is laid.
Since the fixed base of the first type is usually designed with a length and width of 1 m x 1 m, the raster radius can be set to 2 meters for such cameras.
For the second type, which has no additional base, a raster radius of 1 meter suffices.
In this embodiment, the collision raster range formed in this way is a relatively fixed area: regardless of the camera's current operating state, it is the spherical area 21 centered on the camera's relative coordinate point with the preset raster radius as its radius.
In other embodiments of the present invention, the raster radius may be set according to the size of the camera and its auxiliary devices (e.g., the fixed base, the pan-tilt head, and the rotating device), so that the camera has a sufficient collision buffer distance.
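By way of illustration, this fixed spherical variant can be sketched as follows, plugging into the polling loop shown earlier; the camera identifiers are hypothetical, and the radii follow the 2-meter and 1-meter choices described above.

    import math

    # Raster radius per camera: 2 m for a camera on a wheeled fixed base,
    # 1 m for a track-mounted one (hypothetical camera identifiers).
    RASTER_RADIUS = {"CAM1": 2.0, "CAM2": 1.0}

    def form_raster_range(cam, point):
        # A fixed spherical raster range, represented as (center, radius).
        return (point, RASTER_RADIUS[cam])

    def ranges_in_contact(a, b):
        # Two spherical ranges touch when the distance between their centers
        # does not exceed the sum of their radii.
        (p, rp), (q, rq) = a, b
        return math.dist(p, q) <= rp + rq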
In another preferred embodiment of the present invention, in step S2 the collision raster range of a camera is formed as shown in fig. 3, specifically:
step S21, the processing device continuously acquires the coordinate values of the relative coordinate point of the camera and the rotation angle of the pan-tilt head of the camera, with a preset time interval as the cycle;
step S22, the processing device processes the coordinate value of the relative coordinate point at the current moment and the coordinate value of the relative coordinate point at the previous moment to obtain the current moving speed of the camera;
the processing device processes the coordinate value of the relative coordinate point at the current moment and the coordinate value of the relative coordinate point at the previous moment to obtain the current lifting speed of the camera; and
the processing device processes the rotation angle of the pan-tilt head at the current moment and the rotation angle of the pan-tilt head at the previous moment to obtain the current rotational angular velocity of the camera;
step S23, the processing device processes the current moving speed, the current lifting speed and the current rotational angular velocity of the camera to obtain the braking distance, at each angle around the camera, required for the camera to come to a stop from its current operating state;
step S24, the processing device processes the braking distances at each angle around the camera to obtain the collision raster range of the camera at the current time.
Specifically, unlike the previous embodiment, the collision raster range of each camera in this embodiment is dynamic: it changes with the camera's current moving speed, lifting speed, rotational angular velocity and other running-state information.
In this embodiment, the preset time interval may preferably be 40 milliseconds; that is, every 40 milliseconds the processing device samples the coordinate values of the camera's relative coordinate point and the related running-state data (such as the current rotation angle of the pan-tilt head). Equivalently, the camera uploads these values to the processing device every 40 milliseconds.
In this embodiment, after the processing device obtains these data, the current moving speed of the camera may be obtained from the coordinate value of the relative coordinate point at the current moment and the coordinate value at the previous moment, specifically according to the following formula:
v = √((x₁ − x₀)² + (y₁ − y₀)²) / t
where v denotes the current moving speed of the camera, (x₁, y₁, z₁) is the relative coordinate point of the camera at the current moment, (x₀, y₀, z₀) is the relative coordinate point at the previous moment (i.e., one preset time interval earlier), and t denotes the preset time interval.
Similarly, in this embodiment, the processing device may obtain the current lifting speed of the camera from the coordinate value of the relative coordinate point at the current moment and the coordinate value at the previous moment, specifically according to the following formula:
Z = (z₁ − z₀) / t
where Z denotes the current lifting speed of the camera, and the remaining variables are as defined above.
Similarly, in this embodiment, the processing device may obtain the current rotational angular velocity of the camera from the rotation angle of the pan-tilt head at the current moment and the rotation angle at the previous moment, specifically according to the following formula:
ω = (ω₁ − ω₀) / t
where ω denotes the current rotational angular velocity of the camera, ω₁ is the rotation angle of the pan-tilt head at the current moment, ω₀ is the rotation angle of the pan-tilt head at the previous moment, and the remaining variables are as defined above.
In this embodiment, after the current moving speed, lifting speed and rotational angular velocity of the camera have been calculated, the braking distance required for the camera to stop from its current operating state can be calculated for each angle around the camera. Since the weight, load and motion characteristics differ from camera to camera, the calculation method also differs from camera to camera. The collision raster obtained in this way is therefore an irregular three-dimensional area 22 (as shown in fig. 2).
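By way of illustration, the sampling arithmetic of steps S21 to S23 can be sketched as follows. The speed computations mirror the formulas above; the per-angle braking model, however, is purely an assumption for illustration (uniform deceleration), since the actual calculation differs per camera as just noted.

    import math

    T = 0.04  # preset time interval: 40 ms

    def current_speeds(prev_point, cur_point, prev_angle, cur_angle, t=T):
        # Step S22: moving, lifting and rotational speeds from two successive samples.
        (x0, y0, z0), (x1, y1, z1) = prev_point, cur_point
        v = math.hypot(x1 - x0, y1 - y0) / t    # current moving speed
        lift = (z1 - z0) / t                    # current lifting speed
        omega = (cur_angle - prev_angle) / t    # current rotational angular velocity
        return v, lift, omega

    def braking_envelope(v, heading_deg, decel=0.5, step_deg=10):
        # Step S23 under an assumed uniform deceleration decel (m/s^2): the
        # distance to stop from speed s is s**2 / (2 * decel); the speed is
        # projected onto each bearing, so the envelope is longest along the
        # direction of travel. Step S24 builds the irregular region from it.
        env = {}
        for deg in range(0, 360, step_deg):
            s = max(v * math.cos(math.radians(deg - heading_deg)), 0.0)
            env[deg] = s * s / (2.0 * decel)
        return env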
In a preferred embodiment of the present invention, in the first virtual model, third virtual models are respectively set for actual objects in the presentation scene;
while executing step S3, the processing device further executes the following detection steps as shown in fig. 4:
step a1, the processing device determines whether there is a collision raster range at the current time that collides with any one of the third virtual models:
if yes, the processing device sends an instruction of stopping movement to the corresponding camera, and then returns to step S1;
if not, the process returns to step S1.
Specifically, in this embodiment, besides detecting whether the cameras collide with each other, the system must detect whether a camera collides with objects in the actual presentation scene, such as tables, chairs and walls. The boundaries of such actual objects are therefore formed in advance into corresponding third virtual models, which are placed in the first virtual model representing the presentation scene; each third virtual model is likewise a three-dimensional model. The system then detects whether the collision raster range of any camera collides with any third virtual model, and when a collision is detected it sends a stop instruction to make the corresponding camera stop moving.
In fig. 4, the detection of camera-to-camera collisions in step S3 and the detection of camera-to-object collisions in step a1 may be performed simultaneously; there is no required order between them.
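By way of illustration, the step a1 test can be sketched as follows, assuming each third virtual model is approximated by an axis-aligned bounding box (the text does not fix a representation); send_stop is the same hypothetical stop command as before.

    def sphere_hits_box(center, radius, box_min, box_max):
        # A spherical raster range touches an axis-aligned box when the point
        # of the box closest to the sphere's center lies within the radius.
        dist_sq = 0.0
        for c, lo, hi in zip(center, box_min, box_max):
            clamped = min(max(c, lo), hi)
            dist_sq += (c - clamped) ** 2
        return dist_sq <= radius * radius

    def obstacle_check(ranges, obstacles, send_stop):
        # Step a1: test every camera's raster range against every third
        # virtual model; stop the camera on any contact.
        for cam, (point, radius) in ranges.items():
            if any(sphere_hits_box(point, radius, lo, hi) for (lo, hi) in obstacles):
                send_stop(cam)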
In a preferred embodiment of the present invention, a display interface is further provided for displaying the first virtual model and the second virtual model in real time for a user to view;
when the processing device determines that two collision raster ranges collide with each other at the current moment in step S3, a corresponding alarm message is output on the display interface.
Specifically, when two cameras may be about to collide, the positions of the corresponding second virtual models are highlighted in the first virtual model shown on the display interface to form the alarm information, or the alarm information is given as a direct text alert; this is not described in detail here.
In a preferred embodiment of the present invention, similarly, when the processing device determines in step a1 that a collision raster range collides with a third virtual model at the current moment, corresponding alarm information is output on the display interface. The alarm information is formed in the same way as described above and is not repeated here.
In a preferred embodiment of the present invention, based on the anti-collision detection method described above, an anti-collision detection system applied to cameras is provided, likewise suitable for a presentation scene provided with a plurality of cameras that move in the scene according to externally input instructions.
The anti-collision detection system is shown in fig. 5 and includes:
a processing device 51, into which a first virtual model of the presentation scene and a second virtual model of each camera are pre-imported, and which updates the relative coordinate point of each second virtual model in the first virtual model according to the real-time position of the corresponding camera;
the processing device 51 specifically includes:
an acquiring unit 511, configured to acquire a relative coordinate point of each camera in the first virtual model at the current time;
a raster forming unit 512, connected to the acquiring unit 511, which forms the collision raster range of each camera at the current time with the camera's relative coordinate point as a reference;
a first collision judgment unit 513, connected to the raster forming unit 512, which judges whether two collision raster ranges collide with each other in the first virtual model at the current time and outputs a first judgment result;
the processing device 51 sends a movement-stopping command to the corresponding cameras when, according to the first judgment result, two collision raster ranges collide with each other.
In a preferred embodiment of the present invention, a compensation amount of a relative coordinate point is preset for each camera;
As also shown in fig. 5, the processing device 51 specifically includes:
a coordinate conversion unit 514, configured to acquire the real-time position of a camera and process the real-time position together with the compensation amount to obtain the relative coordinate point.
In a preferred embodiment of the present invention, as shown in fig. 6, the raster forming unit 512 specifically includes:
a first forming module 61, which constructs a spherical area, taking the relative coordinate point of the camera at the current moment as the center and the raster radius as the radius, as the collision raster range of the corresponding camera at the current moment.
In another preferred embodiment of the present invention, as shown in fig. 7, the raster forming unit 512 specifically includes:
a first obtaining module 71, configured to continuously obtain the coordinate values of the relative coordinate point of the camera with a preset time interval as the cycle;
a second obtaining module 72, configured to continuously obtain the rotation angle of the pan-tilt head of the camera with the preset time interval as the cycle;
a first processing module 73, connected to the first obtaining module 71, configured to process the coordinate value of the relative coordinate point at the current moment and the coordinate value at the previous moment to obtain the current moving speed of the camera;
a second processing module 74, connected to the first obtaining module 71, configured to process the coordinate value of the relative coordinate point at the current moment and the coordinate value at the previous moment to obtain the current lifting speed of the camera;
a third processing module 75, connected to the second obtaining module 72, configured to process the rotation angle of the pan-tilt head at the current moment and at the previous moment to obtain the current rotational angular velocity of the camera;
a fourth processing module 76, connected to the first processing module 73, the second processing module 74 and the third processing module 75, configured to process the current moving speed, the current lifting speed and the current rotational angular velocity of the camera to obtain the braking distance, at each angle around the camera, required for the camera to come to a stop from its current operating state;
and a second forming module 77, connected to the fourth processing module 76, configured to process the braking distances at each angle around the camera to obtain the collision raster range of the camera at the current moment.
In a preferred embodiment of the present invention, the first virtual model of the presentation scene pre-imported in the processing device 51 includes third virtual models respectively set for actual objects in the presentation scene;
as also shown in fig. 5, the processing device 51 further includes:
a second collision judgment unit 515, connected to the raster forming unit 512, which judges whether any collision raster range collides with any third virtual model at the current time and outputs a second judgment result;
the processing device 51 sends a movement-stopping command to the corresponding camera when, according to the second judgment result, a collision raster range collides with a third virtual model.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (12)

1. An anti-collision detection method applied to cameras is suitable for a presentation scene provided with a plurality of cameras, and the cameras move in the presentation scene according to an externally input instruction; the method is characterized in that a first virtual model of the presentation scene and a second virtual model of each camera are pre-imported into a processing device, and a relative coordinate point of the second virtual model in the first virtual model is updated according to the real-time position of the camera, and the method further comprises the following steps:
step S1, the processing device acquires the relative coordinate point of each camera in the first virtual model at the current moment;
step S2, the processing device forms a collision raster range of the camera at the current time with the relative coordinate point of each camera as a reference;
step S3, the processing device determines whether two collision raster ranges collide with each other in the first virtual model at the current time:
if yes, the processing device sends an instruction of stopping movement to the corresponding camera, and then returns to the step S1;
if not, directly returning to the step S1;
wherein in step S2, the step of forming the collision raster range for one of the cameras specifically includes: step S21, the processing device continuously acquires the coordinate values of the relative coordinate point of the camera and the rotation angle of the pan-tilt head of the camera, with a preset time interval as the cycle;
step S22, the processing device processes the coordinate value of the relative coordinate point at the current time and the coordinate value of the relative coordinate point at the previous time to obtain the current moving speed of the camera;
the processing device processes the coordinate value of the relative coordinate point at the current moment and the coordinate value of the relative coordinate point at the previous moment to obtain the current lifting speed of the camera; and
the processing device processes the rotation angle of the pan-tilt head at the current moment and the rotation angle of the pan-tilt head at the previous moment to obtain the current rotational angular velocity of the camera;
step S23, the processing device processes the current moving speed, the current lifting speed and the current rotational angular velocity of the camera to obtain the braking distance, at each angle around the camera, required for the camera to come to a stop from its current operating state;
step S24, the processing device processes the braking distances at each angle around the camera to obtain the collision raster range of the camera at the current time.
2. The anti-collision detection method according to claim 1, wherein a compensation amount for the relative coordinate point is preset for each of the cameras;
and after the processing device acquires the real-time position of the camera, the relative coordinate point is obtained by processing the real-time position together with the compensation amount.
3. The anti-collision detection method according to claim 1, wherein a raster radius is preset in the processing device;
in step S2, for one camera, the processing device constructs a spherical area with the relative coordinate point as the center and the raster radius as the radius, as the collision raster range of the corresponding camera at the current time.
4. The anti-collision detection method according to claim 3, wherein the raster radius is 1 meter or 2 meters.
5. The anti-collision detection method according to claim 1, wherein the preset time interval is 40 milliseconds.
6. The anti-collision detection method according to claim 1, wherein in the first virtual model, a third virtual model is respectively set for actual objects in the presentation scene;
while executing the step S3, the processing device further executes the following detection steps:
step a1, the processing device determines whether there is a collision between the collision raster range and any one of the third virtual models at the current time:
if yes, the processing device sends an instruction of stopping movement to the corresponding camera, and then returns to the step S1;
if not, the process returns to the step S1 directly.
7. The anti-collision detection method according to claim 1, wherein a display interface is provided to display the first virtual model and the second virtual model in real time for a user to view;
when the processing device determines that the two collision raster ranges collide with each other at the current moment in step S3, outputting corresponding alarm information on the display interface.
8. The anti-collision detection method according to claim 6, wherein a display interface is provided to display the first virtual model and the second virtual model in real time for a user to view;
when the processing device determines in step a1 that a collision raster range collides with a third virtual model at the current moment, corresponding alarm information is output on the display interface.
9. An anti-collision detection system applied to cameras is suitable for a presentation scene provided with a plurality of cameras, and the cameras move in the presentation scene according to an externally input instruction; it is characterized by comprising:
a processing device, into which a first virtual model of the presentation scene and a second virtual model of each camera are pre-imported, and which updates the relative coordinate point of each second virtual model in the first virtual model according to the real-time position of the corresponding camera;
the processing device specifically comprises:
an acquiring unit, configured to acquire the relative coordinate point of each camera in the first virtual model at the current time;
a raster forming unit, connected to the acquiring unit, which forms the collision raster range of each camera at the current moment with the camera's relative coordinate point as a reference;
a first collision judgment unit, connected to the raster forming unit, which judges whether two collision raster ranges collide with each other in the first virtual model at the current moment and outputs a first judgment result;
the processing device sends a movement-stopping instruction to the corresponding cameras when, according to the first judgment result, two collision raster ranges collide with each other;
wherein the raster forming unit specifically includes:
a first acquisition module, which continuously acquires the coordinate values of the relative coordinate point of the camera with a preset time interval as the cycle;
a second acquisition module, which continuously acquires the rotation angle of the pan-tilt head of the camera with the preset time interval as the cycle;
a first processing module, connected to the first acquisition module, which processes the coordinate value of the relative coordinate point at the current moment and the coordinate value at the previous moment to obtain the current moving speed of the camera;
a second processing module, connected to the first acquisition module, which processes the coordinate value of the relative coordinate point at the current moment and the coordinate value at the previous moment to obtain the current lifting speed of the camera;
a third processing module, connected to the second acquisition module, which processes the rotation angle of the pan-tilt head at the current moment and at the previous moment to obtain the current rotational angular velocity of the camera;
a fourth processing module, connected to the first processing module, the second processing module and the third processing module, which processes the current moving speed, the current lifting speed and the current rotational angular velocity of the camera to obtain the braking distance, at each angle around the camera, required for the camera to come to a stop from its current running state;
and a second forming module, connected to the fourth processing module, which processes the braking distances at each angle around the camera to obtain the collision raster range of the camera at the current moment.
10. The anti-collision detection system according to claim 9, wherein a compensation amount for the relative coordinate point is preset for each of the cameras;
the processing device specifically comprises:
a coordinate conversion unit, configured to acquire the real-time position of the camera and process the real-time position together with the compensation amount to obtain the relative coordinate point.
11. The anti-collision detection system according to claim 9, wherein the raster forming unit specifically includes:
a first forming module, which constructs a spherical area, taking the relative coordinate point of the camera at the current moment as the center and a raster radius as the radius, as the collision raster range of the corresponding camera at the current moment.
12. The anti-collision detection system according to claim 9, wherein the first virtual model of the presentation scene pre-imported in the processing device includes third virtual models respectively set for actual objects in the presentation scene;
the processing device further comprises:
a second collision judgment unit, connected to the raster forming unit, which judges whether any collision raster range collides with any third virtual model at the current moment and outputs a second judgment result;
and the processing device sends a movement-stopping instruction to the corresponding camera when, according to the second judgment result, a collision raster range collides with a third virtual model.
CN201710792373.0A 2017-09-05 2017-09-05 Anti-collision detection method and system applied to camera Active CN107666563B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710792373.0A CN107666563B (en) 2017-09-05 2017-09-05 Anti-collision detection method and system applied to camera


Publications (2)

Publication Number Publication Date
CN107666563A CN107666563A (en) 2018-02-06
CN107666563B 2020-04-21

Family

ID=61097247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710792373.0A Active CN107666563B (en) 2017-09-05 2017-09-05 Anti-collision detection method and system applied to camera

Country Status (1)

Country Link
CN (1) CN107666563B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629848A * 2018-05-08 2018-10-09 北京玖扬博文文化发展有限公司 Method and device for keeping a camera within a virtual scene
CN111569423B (en) * 2020-05-14 2023-06-13 北京代码乾坤科技有限公司 Method and device for correcting collision shape
CN112331001A (en) * 2020-10-23 2021-02-05 螺旋平衡(东莞)体育文化传播有限公司 Teaching system based on virtual reality technology

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1576123A (en) * 2003-07-03 2005-02-09 黄保家 Anticollision system for motor vehicle
CN105204625A (en) * 2015-08-31 2015-12-30 小米科技有限责任公司 Safety protection method and device for virtual reality game

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9255813B2 (en) * 2011-10-14 2016-02-09 Microsoft Technology Licensing, Llc User controlled real object disappearance in a mixed reality display


Also Published As

Publication number Publication date
CN107666563A (en) 2018-02-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant