CN117132624A - Method, device, equipment and storage medium for detecting occlusion of following camera

Publication number: CN117132624A (application CN202311404460.6A); granted as CN117132624B
Authority: CN (China)
Prior art keywords: camera, following, point, target object, coordinates
Inventors: 刘瑞平, 杜晓萌, 任轶, 林川
Assignee: Jinan Zuowei Technology Co., Ltd.
Legal status: Active (granted)


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds


Abstract

The invention discloses a method, a device, equipment and a storage medium for detecting occlusion of a following camera. The method determines the initial position of the camera, casts a ray from the camera toward an observation point on the target object, and updates the coordinates of the camera based on a first shielding point screened out from the detected collision points, effectively avoiding the model clipping that occurs when the camera, in order not to be occluded, moves too close to the observed target.

Description

Method, device, equipment and storage medium for detecting occlusion of following camera
Technical Field
The present invention relates to the field of three-dimensional technologies, and in particular, to a method, an apparatus, a device, and a storage medium for detecting occlusion of a following camera.
Background
At present, in a three-dimensional virtual world, a camera moves along with a target object. However, when an obstacle lies between the camera and the target object, if the camera does not adjust its position, its view may be blocked by the obstacle and it cannot achieve the effect of following the target object.
In general, the camera may move to just in front of the obstacle nearest to the followed target object to avoid being blocked. However, when that obstacle is very close to the followed target object, the camera may clip through the model (pass through the scene geometry), which is detrimental to the presentation of the camera's final observation effect.
The foregoing is provided merely to facilitate understanding of the technical scheme of the present invention and is not an admission that it constitutes prior art.
Disclosure of Invention
The invention mainly aims to provide a method, a device, equipment and a storage medium for detecting occlusion of a following camera, so as to solve the technical problem that, in a three-dimensional virtual world, the observing camera clips through the model when an obstacle is too close to the observed target object.
In order to achieve the above object, the present invention provides a method for detecting occlusion of a following camera, the method comprising the steps of:
Determining the following coordinates of the camera in the current scene according to the real-time coordinates of the target object;
acquiring the moving direction of the target object, and determining the orientation of the camera and a target observation point in combination with a preset focusing position;
generating a ray from the camera to the target observation point, and judging whether the ray detects a collision point or not;
and screening out a first shielding point from the detected collision points, and updating the following coordinates of the camera in the current scene based on the first shielding point.
Optionally, the screening out a first shielding point from the detected collision points and updating the following coordinates of the camera in the current scene based on the first shielding point includes:
calculating the distance between each detected collision point and the target observation point;
selecting a collision point corresponding to the minimum distance value as the first shielding point;
and updating the following coordinates of the camera in the current scene according to the coordinates of the first shielding point.
Optionally, the selecting, as the first shielding point, a collision point corresponding to the minimum distance value includes:
obtaining the distance between each collision point and the target observation point;
judging the relative position of the collision point corresponding to the first minimum value and the target object;
Based on the moving direction, judging the relative position of the collision point corresponding to the second minimum value and the target object when the collision point corresponding to the first minimum value is positioned in front of the target object;
and based on the moving direction, when the collision point corresponding to the second minimum distance value is located behind the target object, taking the collision point corresponding to the second minimum distance value as the first shielding point.
Optionally, the determining the following coordinates of the camera in the current scene according to the real-time coordinates of the target object includes:
taking a real-time plane coordinate point of the target object in the current scene as a coordinate system origin of the current scene, and determining a dynamic coordinate system of the current scene based on the target object by combining a left-hand coordinate system through the moving direction of the target object and the vertical height direction of the target object;
and determining the following coordinates of the camera in the current scene according to the determined dynamic coordinate system of the current scene.
Optionally, the acquiring the moving direction of the target object and determining the orientation of the camera and the target observation point in combination with a preset focusing position includes:
Acquiring a moving direction of the target object, and taking the moving direction as the moving direction of the camera;
selecting a position which is a preset value from the origin of the coordinate system in the vertical height direction of the target object as a preset focusing position, and determining the target observation point;
and determining the orientation of the camera by combining the moving direction of the camera with the connecting line direction of the camera and the preset focusing position.
Optionally, the updating the following coordinates of the camera in the current scene according to the coordinates of the first shielding point includes:
acquiring a first distance value between an initial following point of the camera and the target observation point;
taking the distance value between the first shielding point and the target observation point as a second distance value, and determining a following coordinate influence parameter by combining the first distance value;
and updating the following coordinates of the camera in the current scene based on the following coordinate influence parameters.
Optionally, after the step of determining the following coordinates of the camera in the current scene according to the real-time coordinates of the target object, the method further includes:
and when the following coordinates of the camera in the current scene are obtained, moving the camera to the position corresponding to the following coordinates in the current scene in an interpolation mode, and determining the initial following point of the camera.
In addition, to achieve the above object, the present invention also proposes a following camera occlusion detection device including:
the coordinate determining module is used for determining the following coordinates of the camera in the current scene according to the real-time coordinates of the target object;
the orientation determining module is used for acquiring the moving direction of the target object and determining the orientation of the camera and a target observation point by combining a preset focusing position;
the collision detection module is used for generating a ray from the camera to the target observation point and judging whether the ray detects a collision point or not;
and the coordinate updating module is used for screening out a first shielding point from the detected collision points, and updating the following coordinates of the camera in the current scene based on the first shielding point.
In addition, to achieve the above object, the present invention also proposes a following camera occlusion detection device including: a memory, a processor, and a following camera occlusion detection program stored on the memory and executable on the processor, the following camera occlusion detection program configured to implement the steps of the following camera occlusion detection method as described above.
In addition, to achieve the above object, the present invention also proposes a storage medium having stored thereon a following camera occlusion detection program which, when executed by a processor, implements the steps of the following camera occlusion detection method as described above.
In the invention, firstly, the following coordinates of a camera in a current scene are determined according to real-time coordinates of a target object; then, the moving direction of the target object is obtained, and the orientation of the camera and a target observation point are determined in combination with a preset focusing position; a ray is generated from the camera to the target observation point, and whether the ray detects a collision point is judged; finally, a first shielding point is screened out from the detected collision points, and the following coordinates of the camera in the current scene are updated based on the first shielding point. In this way, the initial position and orientation of the camera are determined from the camera's following coordinates in the current scene in combination with the target object, a ray is then cast from the camera to the observation point on the target object to perform collision detection, and the coordinates of the camera are updated according to the first shielding point obtained by screening.
Drawings
FIG. 1 is a schematic structural diagram of a following camera occlusion detection device in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of a following camera occlusion detection method according to the present invention;
FIG. 3 is a flowchart illustrating a second embodiment of the following camera occlusion detection method according to the present invention;
FIG. 4 is a flowchart illustrating a third embodiment of the following camera occlusion detection method according to the present invention;
FIG. 5 is a diagram illustrating an exemplary ray detection in an x-y-z coordinate system of a current scene created based on a target object in a third embodiment of a follow-up camera occlusion detection method of the present invention;
fig. 6 is a block diagram showing the construction of a first embodiment of the following camera occlusion detection device of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a following camera occlusion detection device in a hardware running environment according to an embodiment of the present invention.
As shown in fig. 1, the following camera occlusion detection device may include: a processor 1001, such as a central processing unit (Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004, a memory 1005. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a Display, an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may further include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a Wireless interface (e.g., a Wireless-Fidelity (WI-FI) interface). The Memory 1005 may be a high-speed random access Memory (Random Access Memory, RAM) or a stable nonvolatile Memory (NVM), such as a disk Memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 does not constitute a limitation of the following camera occlusion detection device, and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and a following camera occlusion detection program may be included in the memory 1005 as one storage medium.
In the following camera occlusion detection device shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server; the user interface 1003 is mainly used for data interaction with a user; the processor 1001 and the memory 1005 in the following camera occlusion detection device of the present invention may be provided in the following camera occlusion detection device, where the following camera occlusion detection device invokes a following camera occlusion detection program stored in the memory 1005 through the processor 1001, and executes the following camera occlusion detection method provided by the embodiment of the present invention.
The embodiment of the invention provides a method for detecting occlusion of a following camera, and referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of the method for detecting occlusion of the following camera.
In this embodiment, the following camera occlusion detection method includes the following steps:
step S10: and determining the following coordinates of the camera in the current scene according to the real-time coordinates of the target object.
It should be noted that the execution body of the method of this embodiment may be a terminal device capable of remotely controlling the following camera, having functions of data processing, data storage, coordinate positioning and program running; it may also be a camera itself capable of coordinate following and ray emission that can implement the method of this embodiment, which is not limited here. The various embodiments of the following camera occlusion detection method of the invention are described below taking a following camera occlusion detection device (hereinafter referred to as the detection device) as an example.
It should be understood that the target object may be a person or another kind of object that has volume (such as an animal, a vehicle, etc.) and that the camera needs to follow in virtual reality technology, which is not limited in this embodiment; the embodiments of the present invention are described herein assuming that the target object to be followed is a person.
It will be appreciated that in the movement of the target object, the following camera is typically located at a position behind the target object, and the position of the following camera is obtained given that the current scene position of the target object is known.
Specifically, in order to embody the following characteristics of the camera and the target object, the coordinates of the camera are conveniently obtained directly from the coordinates of the target object, and step S10 includes:
step S101: and determining a dynamic coordinate system of the current scene based on the target object by taking a real-time plane coordinate point of the target object in the current scene as a coordinate system origin of the current scene and combining a left-hand coordinate system through the moving direction of the target object and the vertical height direction of the target object.
It should be noted that the current scene may be a three-dimensional space in which the target object and the camera are located, and a plurality of movable objects (for example, other objects similar to the target object) and immovable objects (for example, walls, boundaries, obstacles, and the like) may exist in this three-dimensional space. When the target object moves in the current scene, the picture captured by the following camera, based on the view angle of the target object, serves as the picture scene area of the current scene, so that the view angle is controlled.
It will be appreciated that the target object has volume. In the plane of the current scene, when a person stands on the ground, the position of the target object, i.e. the ground point under the person's soles, may be selected as the origin of the coordinate system of the current scene. A person moves in a certain direction and, in the plane of the current scene, usually faces the moving direction, so the direction the person faces and moves in may be selected as the first coordinate axis; the direction of the person's vertical height, i.e. the direction perpendicular to the plane of the current scene, is selected as the second coordinate axis; and the line along which the arms lie when raised to shoulder height while the person stands upright may be selected as the third coordinate axis.
In a specific implementation, the third coordinate axis may be taken as the X axis, the second coordinate axis as the Y axis and the first coordinate axis as the Z axis, establishing a dynamic three-dimensional coordinate system of the current scene with the target object as the origin. The position coordinates of each object in the current scene (including movable and immovable objects) are expressed in this system, and the relative positional relationship between each object and the target object in the current scene is reflected by these position coordinates.
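As an illustration, the sketch below builds such a dynamic frame from the target's moving direction and the vertical direction; the helper names and the use of plain tuples are assumptions for illustration, not part of the patent.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def target_frame(move_dir, up=(0.0, 1.0, 0.0)):
    # Z axis: the target's moving direction (first coordinate axis).
    z_axis = normalize(move_dir)
    # Y axis: the vertical height direction (second coordinate axis).
    y_axis = normalize(up)
    # X axis: the shoulder line (third coordinate axis), completing the
    # left-handed convention assumed in this sketch.
    x_axis = normalize(cross(y_axis, z_axis))
    return x_axis, y_axis, z_axis

# Example: the target walks along +Z of the world.
x_axis, y_axis, z_axis = target_frame((0.0, 0.0, 1.0))
```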
Step S102: and determining the following coordinates of the camera in the current scene according to the determined dynamic coordinate system of the current scene.
It can be understood that, given the real-time coordinates of the target object, the problem of solving the following coordinates of the camera in the current scene can be converted into solving the camera's offsets along the axes of the three-dimensional space relative to the target object.
In a specific implementation, once the origin of the dynamic coordinate system of the current scene has been determined based on the target object, if the following camera is located directly behind the target object, the camera sits at the coordinate -distance on the Z axis and +height on the Y axis of the coordinate system; since the camera has no offset in the X-axis direction relative to the origin of the coordinate system, its X coordinate is 0.
Further, given the distance and height values, the following coordinates of the camera may be calculated through vector operations, with the expression: following camera coordinates = height × (target person Y-axis unit vector) − distance × (target person Z-axis unit vector) + target person coordinates, where the target person coordinates are the origin coordinates of the dynamic coordinate system of the current scene.
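A minimal sketch of this vector operation follows; the function name and the example numbers are illustrative assumptions.

```python
def follow_position(target_pos, y_axis, z_axis, height, distance):
    # following camera coordinates =
    #   height * Y-axis unit vector - distance * Z-axis unit vector
    #   + target coordinates
    return tuple(t + height * y - distance * z
                 for t, y, z in zip(target_pos, y_axis, z_axis))

# Example: target at the origin, Y up, Z along the moving direction.
cam = follow_position((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0),
                      height=1.8, distance=4.0)
# cam == (0.0, 1.8, -4.0): above and directly behind the target.
```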
Step S20: and acquiring the moving direction of the target object, and determining the orientation of the camera and a target observation point by combining a preset focusing position.
It should be noted that, after the following coordinates of the current scene of the camera are obtained, the camera is moved along with the moving direction of the target object, and meanwhile, the orientation of the camera needs to be determined so that the lens of the camera focuses on the target position.
It should be understood that a focusing position may be preset on the target object; the focusing position may be any position on the target object, such as the top of the head, the waist or the soles of a person. Different focusing positions give different target observation points of the camera, and different target observation points present different view-angle pictures in the camera, so the user may set the focusing position based on personalized requirements.
It will be appreciated that the direction of the line from the point where the camera's following coordinates place it in the current scene to this focusing position on the target object is the orientation of the camera.
In a specific implementation, the detection device obtains the moving direction of the target object in the current scene and combines it with the focusing position on the target object selected by the user in advance as the target observation point of the camera, so that the orientation of the camera can be determined; the view-angle picture obtained by the camera while moving along with the target object is then based on a single position, avoiding a disordered view angle.
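As a small illustration, the orientation can be obtained as the unit vector of the line from the camera's following point to the preset focusing position; the names below are assumptions.

```python
import math

def camera_orientation(camera_pos, focus_pos):
    # Direction of the line from the camera's following point to the
    # preset focusing position (the target observation point).
    d = tuple(f - c for f, c in zip(focus_pos, camera_pos))
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)

# Camera above and behind the target, observation point at eye height:
look_dir = camera_orientation((0.0, 1.8, -4.0), (0.0, 1.6, 0.0))
```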
Step S30: and generating a ray from the camera to the target observation point, and judging whether the ray detects a collision point or not.
It should be noted that, because there is a certain distance between the camera and the target object, an obstacle may come between them while the target object moves, so that the camera's view from its following position toward the target observation point is blocked and the target object cannot be observed.
It will be appreciated that a ray is cast from the point where the camera's following coordinates place it in the current scene toward the focusing position on the target object, with that focusing position as the end point of the ray; this ray can detect the collision points of all obstacles on the line from the camera position point to the target observation point.
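In an engine this would typically be a physics ray cast; purely for illustration, the sketch below models obstacles as spheres and collects the collision points along the ray from the camera toward the target observation point. All names and the sphere model are assumptions.

```python
import math

def ray_collision_points(origin, end, spheres):
    """Collect collision points on the ray from the camera position
    (origin) toward the target observation point (end). Obstacles are
    modelled as (center, radius) spheres only for this sketch."""
    d = tuple(e - o for e, o in zip(end, origin))
    hits = []
    for center, radius in spheres:
        oc = tuple(o - c for o, c in zip(origin, center))
        a = sum(c * c for c in d)
        b = 2.0 * sum(dc * occ for dc, occ in zip(d, oc))
        c = sum(occ * occ for occ in oc) - radius * radius
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            continue  # the ray misses this obstacle
        for t in ((-b - math.sqrt(disc)) / (2.0 * a),
                  (-b + math.sqrt(disc)) / (2.0 * a)):
            if t >= 0.0:  # keep every hit along the ray's direction
                hits.append(tuple(o + t * dc for o, dc in zip(origin, d)))
    return hits
```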
Step S40: and screening out a first shielding point from the detected collision points, and updating the following coordinates of the camera in the current scene based on the first shielding point.
It will be appreciated that, since there may be multiple obstacles between the camera and the target object, the collision points at which the emitted ray hits all the obstacles on the line from the camera position point to the target observation point may be acquired first.
Specifically, in order to enhance the reliability of the first shielding point obtained by screening, step S40 includes:
step S401: and calculating the distance between each detected collision point and the target observation point.
It will be appreciated that when the rays detect collision points, the coordinates of each collision point may be recorded separately, and the distance from each collision point to the target observation point may be calculated in combination with the coordinates corresponding to the target observation point.
Step S402: and selecting a collision point corresponding to the minimum distance value as the first shielding point.
In a specific implementation, the distance between each collision point and the target observation point is obtained, the collision points are arranged in ascending order of distance value, and the collision point in the first position of that order, i.e. the collision point corresponding to the minimum distance value, is selected as the first shielding point.
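A one-function sketch of this selection, under the same illustrative assumptions as the previous snippets:

```python
def first_shielding_point(hits, target_point):
    # Choose the collision point closest to the target observation point;
    # returns None when the ray detected no collision points.
    def dist_sq(p):
        return sum((a - b) ** 2 for a, b in zip(p, target_point))
    return min(hits, key=dist_sq) if hits else None
```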
Step S403: and updating the following coordinates of the camera in the current scene according to the coordinates of the first shielding point.
It can be understood that if the distance between the first shielding point and the target observation point is relatively short, directly moving the camera, which has its own volume, to just in front of the first shielding point may cause the camera to collide with the target object and impair the camera's effective following of it; the coordinates of the first shielding point are therefore used as a reference factor when updating the following coordinates of the camera.
In a specific implementation, the detection device acquires the coordinates of the first shielding point. Because the shielding point, the camera position point and the target observation point lie on the same ray, the shielding point has no offset in the X-axis direction relative to the origin of the coordinate system, i.e. its X coordinate is 0; the detection device therefore acquires the Y-axis and Z-axis coordinates of the first shielding point and updates the Y-axis and Z-axis coordinates of the camera in the current scene accordingly.
Further, a preset empirical value may be introduced so that, when the following coordinates of the camera updated from the first shielding point are too close to the coordinates of the target object, they are updated again.
According to this embodiment, the following coordinates of a camera in a current scene are first determined according to the real-time coordinates of a target object; then the moving direction of the target object is obtained, and the orientation of the camera and a target observation point are determined in combination with a preset focusing position; a ray is generated from the camera to the target observation point, and whether the ray detects collision points is judged; finally, a first shielding point is screened out from the detected collision points, and the following coordinates of the camera in the current scene are updated based on the first shielding point. The initial position and orientation of the camera are thus determined from the camera's following coordinates in the current scene in combination with the target object, a ray is cast from the camera to the observation point on the target object to perform collision detection, and the coordinates of the camera are updated according to the first shielding point obtained by screening. Further, the real-time plane coordinate point of the target object in the current scene is taken as the origin of the coordinate system of the current scene, and the dynamic coordinate system of the current scene based on the target object is determined from the moving direction of the target object and the vertical height direction of the target object in combination with a left-hand coordinate system, so that the following relationship between the camera and the target object is embodied and the coordinates of the camera can be obtained directly from the coordinates of the target object. The distance between each detected collision point and the target observation point is calculated, the collision point corresponding to the minimum distance value is selected as the first shielding point, and the following coordinates of the camera in the current scene are updated according to the coordinates of the first shielding point, which takes into account the situation where several obstacles exist between the camera and the target object.
Referring to fig. 3, fig. 3 is a flowchart illustrating a second embodiment of a method for detecting occlusion following a camera according to the present invention.
Based on the first embodiment described above, when determining the first shielding point, the front-rear positional relationship between the collision point and the target object is considered in addition to the distance between the collision point and the target observation point; step S402 includes:
step S4021: and obtaining the distance between each collision point and the target observation point.
In a specific implementation, the detection device acquires the distance from each collision point to the target observation point, and sequentially arranges the collision points according to the distance value from small to large.
Step S4022: and judging the relative position of the collision point corresponding to the first minimum value and the target object.
The collision point corresponding to the first minimum value of the distance is the collision point located at the first position in the arrangement order.
It will be appreciated that, since the ray is generated from the camera toward the target observation point, a collision point detected by the ray may also lie in front of the target object in its moving direction. The relative position of a collision point and the target object can be determined based on the coordinates of the detected collision point.
In a specific implementation, the detection device acquires the coordinates of the collision point corresponding to the first minimum distance value; because the coordinate system of the current scene is the left-hand coordinate system established along the moving direction of the target object, the collision point is judged to be in front of the target object when the value of its Z-axis coordinate is positive and behind the target object when the value of its Z-axis coordinate is negative.
Step S4023: and judging the relative position of the collision point corresponding to the second minimum value and the target object when the collision point corresponding to the first minimum value is positioned in front of the target object based on the moving direction.
When the collision point corresponding to the first minimum distance value is located in front of the target object, it does not lie between the camera and the target object and does not block the camera, and the collision point that may block the camera needs to be determined further.
In a specific implementation, when the detection device determines that the collision point corresponding to the first minimum distance value is located in front of the target object, i.e. the camera is not blocked by it, the device selects the collision point corresponding to the second minimum distance value according to the order of distance values and then judges the relative position of that collision point and the target object.
Step S4024: and based on the moving direction, when the collision point corresponding to the second minimum distance value is located behind the target object, taking the collision point corresponding to the second minimum distance value as the first shielding point.
It will be appreciated that when the collision point corresponding to the second minimum distance value is located behind the target object, the collision point corresponding to the second minimum distance value is located between the camera and the target object, which may block the camera.
Further, if the collision point corresponding to the second minimum distance value is still located in front of the target object, collision points may be selected one by one in ascending order of distance value and their positions relative to the target object judged, until a selected collision point is located behind the target object; the collision point with the smallest distance value among those located behind the target object is taken as the first shielding point.
Furthermore, when the distance between each collision point and the target observation point is acquired, the coordinate information of all the collision points may be acquired and the collision points whose Z-axis coordinates are positive directly removed, so that all the collision points behind the target object are obtained; the collision point corresponding to the minimum distance value among all the collision points behind the target object is then selected as the first shielding point for updating the following coordinates of the camera.
In a specific implementation, when the detection device determines that the collision point corresponding to the second minimum distance value is located behind the target object, it takes that collision point as the first shielding point and acquires its coordinates. In this way, combined with the characteristics of ray detection, the relative positional relationship between the ray-detected collision points and the target object is considered, the collision points located in front of the target object are filtered out, and the situation where the camera passes through the target object when moving according to a collision point located in front of it is avoided.
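A sketch of this screening, assuming the collision points are expressed in the target's dynamic frame so that a negative Z coordinate means behind the target (all names are illustrative):

```python
def first_shielding_point(hits, target_point):
    # Discard collision points in front of the target (positive Z), then
    # return the remaining point nearest to the target observation point.
    behind = [p for p in hits if p[2] < 0.0]
    if not behind:
        return None  # nothing between the camera and the target
    return min(behind, key=lambda p: sum((a - b) ** 2
                                         for a, b in zip(p, target_point)))
```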
Further, considering the differences between target objects, different focusing positions need to be selected to determine the target observation point of the camera and obtain a better view-angle effect; step S20 includes:
step S201: and acquiring the moving direction of the target object, and taking the moving direction as the moving direction of the camera.
It will be appreciated that since the camera needs to follow the target object, the direction of movement of the target object can be taken as the direction of movement of the camera, and the orientation of the camera can be determined initially.
Step S202: and selecting a position which is distant from the origin of the coordinate system by a preset value in the vertical height direction of the target object as a preset focusing position, and determining the target observation point.
It will be appreciated that, in the current scene, the target object has volume and therefore a vertical height, and different target objects have different vertical height values. A position at a preset distance from the origin of the coordinate system, chosen based on the vertical height value of the target object, can be selected as the preset focusing position. The preset value may be set by the user for different target objects. For example, when the target object is a person, a value close to the person's vertical height, i.e. the person's height, may be chosen as the preset value so as to simulate the person's eye-based viewing angle; when the target object is a vehicle, half of the vehicle's height value may be chosen as the preset value; the preset value may also be obtained directly by multiplying the acquired vertical height of the target object by an empirical percentage. The selection of the preset value is not limited in this embodiment.
Step S203: and determining the orientation of the camera by combining the moving direction of the camera with the connecting line direction of the camera and the focusing position.
In a specific implementation, after the direction of the camera is primarily determined by the moving direction of the camera, the direction of the camera is further determined by the direction of a connecting line from the camera to a preset focusing position on the target object, so that the camera points to the same observation point when moving along with the target object, and the disorder of the visual angle is avoided.
According to this embodiment, the distance between each collision point and the target observation point is acquired; the relative position of the collision point corresponding to the first minimum distance value and the target object is judged; based on the moving direction, when that collision point is located in front of the target object, the relative position of the collision point corresponding to the second minimum distance value and the target object is judged; and, based on the moving direction, when the collision point corresponding to the second minimum distance value is located behind the target object, it is taken as the first shielding point. By considering the relative positions of the different collision points and the target object in the target object's moving direction, the situation where the camera passes through the observed target object can be avoided. In addition, the moving direction of the target object is obtained and taken as the moving direction of the camera; a position at a preset distance from the origin of the coordinate system in the vertical height direction of the target object is selected as the preset focusing position, determining the target observation point; and the orientation of the camera is determined by combining the camera's moving direction with the direction of the line from the camera to the preset focusing position. Focusing positions can thus be preset according to the characteristics of different target objects, making the determined camera orientation more reliable and helping the camera point at the same observation point while moving along with the target object, so that a disordered view angle is avoided.
Referring to fig. 4, fig. 4 is a flowchart illustrating a third embodiment of a following camera occlusion detection method according to the present invention.
Based on the above embodiments, directly taking the coordinates of the first shielding point as the following coordinates of the camera in the current scene may cause the camera to clip through the model when the shielding point is very close to the target object, affecting the camera's observation effect; the following coordinates of the camera may therefore be calculated with the coordinates of the first shielding point as a parameter. Step S403 includes:
step S4031: and acquiring a first distance value between the initial following point of the camera and the target observation point.
It should be noted that the first distance value between the initial following point of the camera and the target observation point can be calculated from the following coordinates of the camera in the current scene and the coordinates of the target observation point in the current scene. The calculated first distance value is the ideal distance when no shielding object exists between the camera and the target object.
Step S4032: and taking the distance value between the first shielding point and the target observation point as a second distance value, and determining a following coordinate influence parameter by combining the first distance value.
It is understood that the second distance value may be a value calculated by the coordinates of the first occlusion point and the coordinates of the target observation point.
Referring to fig. 5, fig. 5 is a diagram of an example of ray detection in the x-y-z coordinate system of the current scene established based on the Target object in the third embodiment of the following camera occlusion detection method of the present invention. Camera is the following camera, and Hit1, Hit2 and Hit3 are the collision points detected by the ray generated from Camera to the target observation point Target; Hit2 is obtained by screening as the first shielding point used to update the camera coordinates.
Wherein the initial following coordinate of the Camera is camera_ini_position (0, height, -distance); the collision point Hit1 coordinates are Hit1_position (0, height_hit1, -distance_hit1), the collision point Hit2 coordinates are Hit2_position (0, height_hit2, -distance_hit2), and the collision point Hit3 coordinates are Hit3_position (0, height_hit3, -distance_hit3); the Target viewpoint coordinates are target_position (0, height_target, 0).
In a specific implementation, the first distance value is calculated from the coordinates of the camera's initial following point and the coordinates of the target observation point; the distance between the coordinates of each collision point (Hit1, Hit2, Hit3) and the coordinates of the target observation point is calculated, and the collision point corresponding to the minimum distance, i.e. Hit2, is selected as the first shielding point; the second distance value is then calculated from the determined coordinates of the first shielding point Hit2 and the coordinates of the target observation point; finally, the following coordinate influence parameter K is determined from the first distance value and the second distance value, K being the percentage K = second distance value / first distance value.
Step S4033: and updating the following coordinates of the camera in the current scene based on the following coordinate influence parameters.
In a specific implementation, once the following coordinate influence parameter K is obtained, the updated following coordinates of the Camera in the current scene may be Camera_position = (0, height_hit2, -distance × K / 0.3f), where 0.3f may be an empirical value that prevents the camera from coming too close to the followed target.
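The sketch below strings these steps together using the coordinate names of fig. 5; the concrete numbers in the example and the helper names are assumptions, and 0.3 stands in for the empirical value 0.3f.

```python
import math

def updated_follow_coords(camera_ini_position, hit2_position, target_position):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # First distance value: ideal, unobstructed camera-to-target distance.
    first_distance = dist(camera_ini_position, target_position)
    # Second distance value: first shielding point to target observation point.
    second_distance = dist(hit2_position, target_position)
    k = second_distance / first_distance  # following coordinate influence parameter
    distance = -camera_ini_position[2]    # camera_ini_position = (0, height, -distance)
    height_hit2 = hit2_position[1]
    # Camera_position = (0, height_hit2, -distance * K / 0.3f)
    return (0.0, height_hit2, -distance * k / 0.3)

# Example with made-up values for the fig. 5 coordinates:
cam = updated_follow_coords((0.0, 1.8, -4.0),   # Camera_ini_position
                            (0.0, 1.5, -1.0),   # Hit2_position
                            (0.0, 1.6, 0.0))    # Target_position
```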
Further, when determining the coordinates of the target object and obtaining the following coordinates of the camera in the current scene so that the camera can quickly follow the movement of the target object, after step S10, the method further includes:
step S10': and when the following coordinates of the camera in the current scene are obtained, moving the camera to the position corresponding to the following coordinates in the current scene in an interpolation mode, and determining the initial following point of the camera.
It can be understood that, for the camera to move along with the target object in three-dimensional space, one implementation idea is as follows: using the idea of motion decomposition, with the position corresponding to the following coordinates in the current scene as the reference point, the motion of the camera is decomposed into two sub-motions, translating the camera's position and rotating the camera's orientation. In each sub-motion, the translation and rotation of the camera are then realized frame by frame by interpolation, achieving the goal of following the movement of the target object.
In a specific implementation, when the following coordinates of the camera in the current scene are determined by the coordinates of the target object, the detection device can move the camera to a position corresponding to the following coordinates in the current scene in real time in an interpolation mode, and the position is an initial following point of the camera.
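A per-frame interpolation step might look like the following sketch; the smoothing factor and function names are illustrative assumptions, and the rotation toward the target observation point would be interpolated in the same frame-by-frame fashion.

```python
def lerp(a, b, t):
    # Component-wise linear interpolation between two positions.
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def step_camera(current_pos, follow_pos, smoothing=0.1):
    # Called once per frame: move the camera a fraction of the way toward
    # the position corresponding to the following coordinates, so the
    # camera eases into its following point instead of teleporting.
    return lerp(current_pos, follow_pos, smoothing)
```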
According to this embodiment, the first distance value between the initial following point of the camera and the target observation point is acquired; the distance value between the first shielding point and the target observation point is taken as the second distance value; the following coordinate influence parameter is determined in combination with the first distance value; and the following coordinates of the camera in the current scene are updated based on this parameter. To avoid the model clipping caused when the obstacle is too close to the target object, the camera is not moved directly to the determined first shielding point; instead, the distance between the first shielding point and the target observation point is calculated and used as a factor in computing the camera's final position, and the camera's following movement is realized by interpolation. In this way, when the following camera detects a shielding object between itself and the target object, it can automatically approach the observed target object without clipping through the model by coming too close.
In addition, the embodiment of the invention also provides a storage medium, wherein the storage medium is stored with a following camera occlusion detection program, and the following camera occlusion detection program realizes the steps of the following camera occlusion detection method when being executed by a processor.
Referring to fig. 6, fig. 6 is a block diagram showing the structure of a first embodiment of the following camera occlusion detection device of the present invention.
As shown in fig. 6, the following camera occlusion detection device of the present invention includes:
the coordinate determining module 601 is configured to determine a following coordinate of the camera in the current scene according to a real-time coordinate of the target object;
the orientation determining module 602 is configured to obtain a moving direction of the target object, and determine an orientation of the camera and a target observation point in combination with a preset focusing position;
a collision detection module 603, configured to generate a ray from the camera to the target observation point, and determine whether the ray detects a collision point;
and the coordinate updating module 604 is used for screening out a first shielding point from the detected collision points, and updating the following coordinates of the camera in the current scene based on the first shielding point.
In this device, firstly, the following coordinates of a camera in a current scene are determined according to real-time coordinates of a target object; then, the moving direction of the target object is obtained, and the orientation of the camera and a target observation point are determined in combination with a preset focusing position; a ray is generated from the camera to the target observation point, and whether the ray detects a collision point is judged; finally, a first shielding point is screened out from the detected collision points, and the following coordinates of the camera in the current scene are updated based on the first shielding point. The initial position and orientation of the camera are determined from the camera's following coordinates in the current scene in combination with the target object, a ray is then cast from the camera to the observation point on the target object to perform collision detection, and the coordinates of the camera are updated according to the first shielding point obtained by screening.
Based on the first embodiment of the following camera shielding detection device of the present invention, a second embodiment of the following camera shielding detection device of the present invention is provided.
In this embodiment, the coordinate updating module 604 is configured to calculate a distance between each detected collision point and the target observation point; selecting a collision point corresponding to the minimum distance value as the first shielding point; and updating the following coordinates of the camera in the current scene according to the coordinates of the first shielding point.
Further, the coordinate updating module 604 is further configured to obtain a distance between each collision point and the target observation point; judging the relative position of the collision point corresponding to the first minimum value and the target object; based on the moving direction, judging the relative position of the collision point corresponding to the second minimum value and the target object when the collision point corresponding to the first minimum value is positioned in front of the target object; and based on the moving direction, when the collision point corresponding to the second minimum distance value is located behind the target object, taking the collision point corresponding to the second minimum distance value as the first shielding point.
The coordinate determining module 601 is configured to determine a dynamic coordinate system of the current scene based on the target object by using a real-time plane coordinate point of the target object in the current scene as an origin of the coordinate system of the current scene and combining a left-hand coordinate system with a moving direction of the target object and a vertical height direction of the target object; and determining the following coordinates of the camera in the current scene according to the determined dynamic coordinate system of the current scene.
Further, the coordinate determining module 601 is further configured to obtain a moving direction of the target object, and take the moving direction as a moving direction of the camera; selecting a position which is a preset value from the origin of the coordinate system in the vertical height direction of the target object as a preset focusing position, and determining the target observation point; and determining the orientation of the camera by combining the moving direction of the camera with the connecting line direction of the camera and the preset focusing position.
Further, the coordinate updating module 604 is further configured to obtain a first distance value between an initial following point of the camera and the target observation point; taking the distance value between the first shielding point and the target observation point as a second distance value, and determining a following coordinate influence parameter by combining the first distance value; and updating the following coordinates of the camera in the current scene based on the following coordinate influence parameters.
Further, the coordinate determining module 601 is further configured to, when obtaining the following coordinate of the camera in the current scene, move the camera to a position corresponding to the following coordinate in the current scene by interpolation, and determine an initial following point of the camera.
Other embodiments or specific implementation manners of the following camera shielding detection device of the present invention may refer to the above method embodiments, and are not described herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or system that comprises that element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. read-only memory/random-access memory, magnetic disk, optical disk), comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (10)

1. A method for detecting occlusion of a following camera, the method comprising:
determining the following coordinates of the camera in the current scene according to the real-time coordinates of the target object;
acquiring the moving direction of the target object, and determining the orientation of the camera and a target observation point in combination with a preset focusing position;
generating a ray from the camera to the target observation point, and judging whether the ray detects a collision point or not;
and screening out a first shielding point from the detected collision points, and updating the following coordinates of the camera in the current scene based on the first shielding point.
2. The method for detecting occlusion of a following camera according to claim 1, wherein the step of screening out a first occlusion point from the detected collision points, and updating the following coordinates of the camera in the current scene based on the first occlusion point, comprises:
Calculating the distance between each detected collision point and the target observation point;
selecting a collision point corresponding to the minimum distance value as the first shielding point;
and updating the following coordinates of the camera in the current scene according to the coordinates of the first shielding point.
3. The method for detecting occlusion of a following camera according to claim 2, wherein selecting the collision point corresponding to the minimum distance value as the first occlusion point comprises:
obtaining the distance between each collision point and the target observation point;
judging the position, relative to the target object, of the collision point corresponding to the first minimum distance value;
based on the moving direction, when the collision point corresponding to the first minimum distance value is located in front of the target object, judging the position, relative to the target object, of the collision point corresponding to the second minimum distance value;
and based on the moving direction, when the collision point corresponding to the second minimum distance value is located behind the target object, taking the collision point corresponding to the second minimum distance value as the first occlusion point.
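One plausible reading of this front/behind screening, sketched with a dot-product test against the moving direction. The claim leaves implicit what happens when the nearest hit is already behind the target; returning it directly as the occlusion point is an assumption of this sketch.

```python
import numpy as np

def is_in_front(point, target_pos, move_dir):
    # Positive projection onto the moving direction => in front of the target.
    return float(np.dot(point - target_pos, move_dir)) > 0.0

def screen_occlusion_point(hits_by_distance, target_pos, move_dir):
    """hits_by_distance: collision points sorted by ascending distance to the
    observation point, i.e. the first and second minimum distance values."""
    nearest = hits_by_distance[0]
    if not is_in_front(nearest, target_pos, move_dir):
        return nearest  # assumed branch: the nearest hit already lies behind
    if len(hits_by_distance) > 1:
        second = hits_by_distance[1]
        if not is_in_front(second, target_pos, move_dir):
            return second  # the branch the claim spells out
    return None  # no usable occlusion point detected
```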
4. The method for detecting occlusion of a following camera according to claim 1, wherein determining the following coordinates of the camera in the current scene according to the real-time coordinates of the target object comprises:
taking the real-time planar coordinate point of the target object in the current scene as the origin of the scene's coordinate system, and constructing a dynamic, target-centered coordinate system for the current scene from the moving direction of the target object and its vertical height direction, following the left-handed convention;
and determining the following coordinates of the camera in the current scene according to the constructed dynamic coordinate system.
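A sketch of one way to construct such a frame, assuming a Unity-style left-handed, y-up arrangement. With numpy's right-hand-rule cross product, cross(up, forward) yields the left-handed "right" axis.

```python
import numpy as np

def dynamic_frame(target_planar_pos, move_dir, up=np.array([0.0, 1.0, 0.0])):
    """Target-centered, left-handed frame: origin at the target's real-time
    planar position, forward along its moving direction, up along its
    vertical height direction."""
    forward = move_dir / np.linalg.norm(move_dir)
    right = np.cross(up, forward)  # +x when up = +y and forward = +z
    return {"origin": target_planar_pos, "forward": forward,
            "up": up, "right": right}
```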
5. The method for detecting occlusion of a following camera according to claim 4, wherein acquiring the moving direction of the target object and determining the orientation of the camera and the target observation point in combination with the preset focus position comprises:
acquiring the moving direction of the target object, and taking it as the moving direction of the camera;
selecting, as the preset focus position, the point at a preset distance from the origin of the coordinate system along the vertical height direction of the target object, thereby determining the target observation point;
and determining the orientation of the camera from its moving direction together with the direction of the line connecting the camera and the preset focus position.
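A sketch under the same assumed frame; focus_height is a hypothetical placeholder for the claim's preset value.

```python
import numpy as np

def camera_orientation(camera_pos, frame, focus_height=1.5):
    """Place the focus position a preset height above the frame origin,
    take it as the target observation point, and aim the camera along the
    line from itself to that point."""
    observe_point = frame["origin"] + focus_height * frame["up"]
    look_dir = observe_point - camera_pos
    return look_dir / np.linalg.norm(look_dir), observe_point
```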
6. The method for detecting occlusion of a following camera according to claim 2, wherein updating the following coordinates of the camera in the current scene according to the coordinates of the first occlusion point comprises:
acquiring a first distance value between the initial following point of the camera and the target observation point;
taking the distance value between the first occlusion point and the target observation point as a second distance value, and determining a following-coordinate influence parameter in combination with the first distance value;
and updating the following coordinates of the camera in the current scene based on the following-coordinate influence parameter.
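The claim does not give the formula for the influence parameter. A natural guess, shown here purely as an assumption, is the ratio of the second distance value to the first, used to pull the camera in along its line of sight until it sits just in front of the occluder.

```python
def updated_follow_coords(initial_follow, observe_point, d1, d2, margin=0.95):
    """initial_follow, observe_point: numpy 3-vectors.
    d1: distance from the initial following point to the observation point.
    d2: distance from the first occlusion point to the observation point.
    margin is an assumed safety factor keeping the camera slightly nearer
    the target than the occluder, so the occluder cannot clip the view."""
    k = margin * d2 / d1  # following-coordinate influence parameter (assumed form)
    return observe_point + k * (initial_follow - observe_point)
```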
7. The method for detecting occlusion of a following camera according to any one of claims 1 to 6, further comprising, after the step of determining the following coordinates of the camera in the current scene according to the real-time coordinates of the target object:
once the following coordinates of the camera in the current scene are obtained, moving the camera to the position corresponding to those coordinates by interpolation, and taking that position as the initial following point of the camera.
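A sketch of the interpolated move. Exponential smoothing is one choice of interpolation among many, and smoothing and dt are assumed tuning values; the claim requires only that the camera move by interpolation rather than snapping.

```python
import numpy as np

def step_camera(current_pos, follow_coords, smoothing=8.0, dt=1.0 / 60.0):
    """Each tick, move the camera a frame-rate-independent fraction of the
    way toward the newly computed following coordinates."""
    t = 1.0 - float(np.exp(-smoothing * dt))
    return current_pos + t * (follow_coords - current_pos)
```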
8. A following camera occlusion detection device, comprising:
a coordinate determining module, configured to determine the following coordinates of the camera in the current scene according to the real-time coordinates of a target object;
an orientation determining module, configured to acquire the moving direction of the target object and determine the orientation of the camera and a target observation point in combination with a preset focus position;
a collision detection module, configured to generate a ray from the camera to the target observation point and judge whether the ray detects any collision point;
and a coordinate updating module, configured to screen out a first occlusion point from the detected collision points and update the following coordinates of the camera in the current scene based on the first occlusion point.
9. Following camera occlusion detection equipment, comprising: a memory, a processor, and a following camera occlusion detection program stored in the memory and executable on the processor, wherein the program is configured to implement the steps of the method for detecting occlusion of a following camera according to any one of claims 1 to 7.
10. A storage medium having stored thereon a following camera occlusion detection program which, when executed by a processor, implements the steps of the method for detecting occlusion of a following camera according to any one of claims 1 to 7.
CN202311404460.6A 2023-10-27 2023-10-27 Method, device, equipment and storage medium for detecting occlusion of following camera Active CN117132624B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311404460.6A CN117132624B (en) 2023-10-27 2023-10-27 Method, device, equipment and storage medium for detecting occlusion of following camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311404460.6A CN117132624B (en) 2023-10-27 2023-10-27 Method, device, equipment and storage medium for detecting occlusion of following camera

Publications (2)

Publication Number Publication Date
CN117132624A 2023-11-28
CN117132624B 2024-01-30

Family

ID=88854968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311404460.6A Active CN117132624B (en) 2023-10-27 2023-10-27 Method, device, equipment and storage medium for detecting occlusion of following camera

Country Status (1)

Country Link
CN (1) CN117132624B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109272527A (en) * 2018-09-03 2019-01-25 中国人民解放军国防科技大学 Tracking control method and device for random moving target in three-dimensional scene
CN110290351A (en) * 2019-06-26 2019-09-27 广东康云科技有限公司 A kind of video target tracking method, system, device and storage medium
CN110704914A (en) * 2019-09-20 2020-01-17 同济大学建筑设计研究院(集团)有限公司 Sight line analysis method and device, computer equipment and storage medium
CN110929639A (en) * 2019-11-20 2020-03-27 北京百度网讯科技有限公司 Method, apparatus, device and medium for determining position of obstacle in image
CN112507799A (en) * 2020-11-13 2021-03-16 幻蝎科技(武汉)有限公司 Image identification method based on eye movement fixation point guidance, MR glasses and medium
CN113610984A (en) * 2021-06-16 2021-11-05 南京邮电大学 Hololens2 holographic glasses-based augmented reality method
CN115317916A (en) * 2022-06-23 2022-11-11 网易(杭州)网络有限公司 Method and device for detecting overlapped objects in virtual scene and electronic equipment

Also Published As

Publication number Publication date
CN117132624B (en) 2024-01-30

Similar Documents

Publication Publication Date Title
US10764626B2 (en) Method and apparatus for presenting and controlling panoramic image, and storage medium
CN110362193B (en) Target tracking method and system assisted by hand or eye tracking
US20220122331A1 (en) Interactive method and system based on augmented reality device, electronic device, and computer readable medium
TWI649675B (en) Display device
JP6681352B2 (en) Information processing system, information processing program, information processing device, information processing method, game system, game program, game device, and game method
KR20160147495A (en) Apparatus for controlling interactive contents and method thereof
US20140354631A1 (en) Non-transitory storage medium encoded with computer readable information processing program, information processing apparatus, information processing system, and information processing method
CN111078018A (en) Touch control method of display, terminal device and storage medium
CN111784844B (en) Method and device for observing virtual object, storage medium and electronic equipment
CN108629799B (en) Method and equipment for realizing augmented reality
WO2019091117A1 (en) Robotic 3d scanning systems and scanning methods
CN111275801A (en) Three-dimensional picture rendering method and device
CN110968194A (en) Interactive object driving method, device, equipment and storage medium
CN112465911A (en) Image processing method and device
CN110286906B (en) User interface display method and device, storage medium and mobile terminal
CN117132624B (en) Method, device, equipment and storage medium for detecting occlusion of following camera
CN113870213A (en) Image display method, image display device, storage medium, and electronic apparatus
AU2010338191B2 (en) Stabilisation method and computer system
CN107952240B (en) Game control method and device realized by using selfie stick and computing equipment
CN114758105A (en) Collision prompt method, collision prevention device and computer readable storage medium
KR101473234B1 (en) Method and system for displaying an image based on body tracking
CN111475026A (en) Space positioning method based on mobile terminal application augmented virtual reality technology
US11942008B2 (en) Smart tracking-based projection method and system
CN114070956B (en) Image fusion processing method, system, equipment and computer readable storage medium
CN115348438B (en) Control method and related device for three-dimensional display equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant