WO2022198822A1 - Camera orientation calibration method, apparatus, device, storage medium and program - Google Patents

Camera orientation calibration method, apparatus, device, storage medium and program

Info

Publication number
WO2022198822A1
Authority
WO
WIPO (PCT)
Prior art keywords
orientation
image
camera
angle
calibrated
Prior art date
Application number
PCT/CN2021/102931
Other languages
English (en)
French (fr)
Inventor
王露
朱烽
赵瑞
Original Assignee
深圳市商汤科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市商汤科技有限公司
Publication of WO2022198822A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30232 - Surveillance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30244 - Camera pose

Definitions

  • the present application relates to the technical field of orientation calibration, and in particular, to a camera orientation calibration method, apparatus, device, storage medium and program.
  • Embodiments of the present application provide a camera orientation calibration method, apparatus, device, storage medium, and program.
  • the embodiment of the present application provides a camera orientation calibration method, which is performed by an electronic device and includes: acquiring an image sequence of a target object, the image sequence including a first image; obtaining, based on the image sequence, the moving direction of the target object during the acquisition of the image sequence; determining a first orientation of the target object in the first image; and obtaining, based on the first orientation and the moving direction, a second orientation of the first camera to be calibrated, where the first camera to be calibrated is the camera that captures the first image.
  • the first orientation includes: the front of the target object faces the first camera to be calibrated or the back of the target object faces the first camera to be calibrated, and obtaining the second orientation of the first camera to be calibrated based on the first orientation and the moving direction includes:
  • in a case where the first orientation is that the front of the target object faces the first camera to be calibrated, the second orientation is opposite to the moving direction; in a case where the first orientation is that the back of the target object faces the first camera to be calibrated, the second orientation is the same as the moving direction.
  • since the first orientation includes only two cases, the camera orientation calibration device determines the first orientation based on the reference angle of the first orientation, so that the orientation of the first camera to be calibrated can be determined based on the first orientation and the moving direction, reducing the amount of data processing.
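The two-case rule above can be sketched as a small helper. The function name, the compass-bearing convention, and the string labels are our own illustrative assumptions, not details from the application:

```python
def camera_orientation(first_orientation: str, moving_bearing: float) -> float:
    """Sketch of the rule described above.

    first_orientation: "front" if the target's front faces the camera,
                       "back" if its back faces the camera.
    moving_bearing: the target's moving direction as a compass bearing
                    in degrees, [0, 360).
    Returns the camera's shooting direction as a bearing in degrees.
    """
    if first_orientation == "front":
        # The target moves toward the camera, so the camera faces the
        # direction opposite to the moving direction.
        return (moving_bearing + 180.0) % 360.0
    # The target moves away from the camera, so the camera faces the
    # same direction the target moves.
    return moving_bearing % 360.0
```

Because the first orientation takes only these two values, this lookup is all the "data processing" the rule requires.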
  • the obtaining, based on the image sequence, the moving direction of the target object during the acquisition process of the image sequence includes:
  • obtaining map data, where the map data includes a target road, and the target road is the road on which the target object moves;
  • obtaining a first trajectory of the target object based on the image sequence;
  • obtaining the moving direction of the target object based on the first trajectory and the target road.
  • the target road is the road on which the target object moves.
  • the camera orientation calibration device can obtain the information of the target road based on the map data.
  • the camera orientation calibration device obtains the trajectory of the target object in the image sequence based on the position of the target object in the image sequence and the acquisition time of the image sequence.
  • the camera orientation calibration device can obtain the moving direction of the target object in the image sequence (hereinafter referred to as the reference moving direction) based on the trajectory, determine the angle between the reference moving direction and the direction of the target road, and obtain the moving direction of the target object based on the angle and the direction of the target road.
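The first half of that step, deriving the reference moving direction from the trajectory, can be sketched as follows. The coordinate frame and the bearing convention (clockwise from +y) are our illustrative assumptions:

```python
import math

def reference_bearing(trajectory):
    """Compute the coarse moving direction (the 'reference moving
    direction' above) from a time-ordered trajectory.

    trajectory: list of (x, y) positions sorted by acquisition time.
    Returns a bearing in degrees measured clockwise from the +y axis
    (treated as 'north' here for illustration).
    """
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
    dx, dy = x1 - x0, y1 - y0
    # atan2(dx, dy) gives the clockwise angle from the +y axis.
    return math.degrees(math.atan2(dx, dy)) % 360.0
```

The resulting bearing is then compared against the target road's direction, as the description goes on to explain.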
  • the images in the image sequence are acquired by the first camera to be calibrated
  • the obtaining the first trajectory of the target object based on the image sequence includes:
  • a trajectory of the target object in the image sequence is obtained as the first trajectory;
  • the obtaining the moving direction of the target object based on the first trajectory and the target road includes:
  • the moving direction of the target object is obtained.
  • the image sequence includes a first image subsequence and a second image subsequence, the images in the first image subsequence are acquired by the first camera to be calibrated, and the images in the second image subsequence are acquired by a second camera to be calibrated;
  • the obtaining the first trajectory of the target object based on the image sequence includes:
  • the trajectory of the target object in the real world is taken as the first trajectory.
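A minimal sketch of the cross-camera first trajectory described above, under the simplifying assumption that each sighting is located at the position of the camera that recorded it (the names and data shapes are ours, not from the application):

```python
def real_world_trajectory(detections):
    """Build the real-world first trajectory from sightings gathered
    across the cameras to be calibrated: each sighting contributes the
    position of the camera that made it, ordered by acquisition time.

    detections: list of (acquisition_time, (x, y)) pairs.
    Returns the time-ordered list of positions.
    """
    return [pos for _, pos in sorted(detections, key=lambda d: d[0])]
```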
  • the direction of the target road includes a first direction and a second direction
  • the obtaining the moving direction of the target object based on the first trajectory and the target road includes:
  • in a case where the angle between the direction of the first trajectory and the first direction is smaller than the angle between the direction of the first trajectory and the second direction, determining that the moving direction is the first direction;
  • in a case where the angle between the direction of the first trajectory and the second direction is smaller, the moving direction is determined to be the second direction.
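Choosing between the road's two opposite directions can be sketched as follows. The helper names and the closer-angle criterion are illustrative assumptions:

```python
def angular_difference(a: float, b: float) -> float:
    """Smallest absolute difference between two bearings, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def snap_to_road(reference_bearing: float, road_bearing: float) -> float:
    """Pick which of the road's two opposite directions the target is
    following: the one closer in angle to the trajectory's reference
    bearing. All bearings are in degrees."""
    first = road_bearing % 360.0
    second = (road_bearing + 180.0) % 360.0
    if angular_difference(reference_bearing, first) <= angular_difference(reference_bearing, second):
        return first
    return second
```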
  • the first image is an image with the largest time stamp in the image sequence.
  • the image sequence further includes a second image different from the first image, the second image is acquired by the first camera to be calibrated, and the method further includes:
  • the camera orientation calibration device first determines the fourth orientation based on the second image, and then obtains the fifth orientation based on the second orientation and the fourth orientation, which can improve the orientation accuracy of the first camera to be calibrated.
  • obtaining the fifth orientation of the first camera to be calibrated based on the second orientation and the fourth orientation includes:
  • the second orientation and the fourth orientation are weighted and averaged to obtain the fifth orientation.
  • the obtaining the first weight of the second orientation includes:
  • determining a first quantity of first target orientations, where the first target orientation is an orientation that is the same as the second orientation;
  • the first weight is obtained based on the first quantity, and the first weight and the first quantity are positively correlated.
  • the obtaining the second weight of the fourth orientation includes:
  • determining a second quantity of second target orientations, where the second target orientation is an orientation that is the same as the fourth orientation;
  • the second weight is obtained based on the second quantity, and the second weight and the second quantity are positively correlated.
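The description only requires that each weight be positively correlated with its supporting quantity; proportional normalization is one simple choice satisfying that, sketched here with hypothetical names:

```python
def orientation_weights(first_quantity: int, second_quantity: int):
    """One weighting consistent with the description above: each
    weight grows with its quantity (proportional normalization; the
    text only requires positive correlation, so this is one of many
    valid choices)."""
    total = first_quantity + second_quantity
    if total == 0:
        return 0.5, 0.5  # no evidence either way: equal weights
    return first_quantity / total, second_quantity / total
```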
  • before the second orientation and the fourth orientation are weighted and averaged based on the first weight and the second weight to obtain the fifth orientation, the method further includes: obtaining a mapping relationship between orientations and direction angles.
  • the weighted average of the second orientation and the fourth orientation based on the first weight and the second weight to obtain the fifth orientation includes:
  • a weighted average of the first angle and the second angle is performed to obtain a third angle, which is used as the fifth orientation.
  • the camera orientation calibration apparatus maps the second orientation and the fourth orientation to the first angle and the second angle respectively based on the mapping relationship.
  • the third angle is obtained by weighted averaging of the first angle and the second angle, and the third angle is used as the fifth orientation, so as to improve the accuracy of the orientation of the camera to be calibrated.
  • the weighted average of the first angle and the second angle based on the first weight and the second weight to obtain a third angle includes:
  • mapping the first angle to a first point on a reference circle, where the first angle is equal to the angle between a first vector and a coordinate axis of a rectangular coordinate system, the first vector is the vector pointing from the center of the reference circle to the first point, and the reference circle is located in the rectangular coordinate system;
  • mapping the second angle to a second point on the reference circle, where the second angle is equal to the angle between a second vector and the coordinate axis, and the second vector is the vector pointing from the center of the circle to the second point;
  • performing a weighted average on the coordinates of the first point and the coordinates of the second point to obtain a third point;
  • determining the angle between a third vector and the coordinate axis to obtain the third angle, where the third vector is the vector pointing from the center of the circle to the third point.
  • by averaging on the reference circle rather than averaging the angles directly, the camera orientation calibration device can reduce the probability of the above error occurring.
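The reference-circle construction above is the classic circular weighted mean. A minimal sketch, using a unit circle as the reference circle (an assumption; any radius gives the same angle):

```python
import math

def circular_weighted_mean(angle1: float, w1: float,
                           angle2: float, w2: float) -> float:
    """Average two angles (degrees) as described above: map each angle
    to a point on the unit circle, take the weighted average of the
    point coordinates, and read off the angle of the resulting vector.
    """
    p1 = (math.cos(math.radians(angle1)), math.sin(math.radians(angle1)))
    p2 = (math.cos(math.radians(angle2)), math.sin(math.radians(angle2)))
    x = w1 * p1[0] + w2 * p2[0]  # weighted average of x coordinates
    y = w1 * p1[1] + w2 * p2[1]  # weighted average of y coordinates
    return math.degrees(math.atan2(y, x)) % 360.0
```

This avoids the wrap-around error of averaging angles directly: a naive mean of 350 and 10 degrees gives 180 degrees, while the circular mean lands near 0 degrees (mod 360), which is the kind of error the reference-circle construction guards against.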
  • the first camera to be calibrated includes a camera.
  • the embodiment of the present application provides a camera orientation calibration device, and the device includes:
  • an acquisition unit configured to acquire an image sequence of the target object, the image sequence including the first image
  • a first processing unit configured to obtain, based on the image sequence, the moving direction of the target object during the acquisition process of the image sequence
  • a second processing unit configured to determine a first orientation of the target object in the first image
  • the third processing unit is configured to obtain the second orientation of the first camera to be calibrated based on the first orientation and the moving direction, where the first camera to be calibrated is the camera that captures the first image.
  • the first orientation includes: the front of the target object faces the first camera to be calibrated or the back of the target object faces the first camera to be calibrated, and the third processing unit is configured to:
  • in a case where the first orientation is that the front of the target object faces the first camera to be calibrated, determine that the second orientation is opposite to the moving direction; in a case where the first orientation is that the back of the target object faces the first camera to be calibrated, determine that the second orientation is the same as the moving direction.
  • the third processing unit is configured to: obtain map data, where the map data includes a target road, and the target road is the road on which the target object moves; obtain a first trajectory of the target object based on the image sequence; and obtain the moving direction of the target object based on the first trajectory and the target road.
  • the images in the image sequence are acquired by the first camera to be calibrated
  • the third processing unit is configured to: obtain a trajectory of the target object in the image sequence as the first trajectory; and obtain the moving direction of the target object based on the first trajectory and the target road.
  • the image sequence includes a first image subsequence and a second image subsequence, the images in the first image subsequence are acquired by the first camera to be calibrated, and the images in the second image subsequence are acquired by a second camera to be calibrated;
  • the third processing unit is configured to: based on the acquisition time of the images in the first image subsequence, the acquisition time of the images in the second image subsequence, the position of the first camera to be calibrated, and the position of the second camera to be calibrated, obtain the trajectory of the target object in the real world as the first trajectory;
  • the direction of the target road includes a first direction and a second direction
  • the third processing unit is configured to: in a case where the angle between the direction of the first trajectory and the first direction is smaller than the angle between the direction of the first trajectory and the second direction, determine that the moving direction is the first direction; otherwise, determine that the moving direction is the second direction.
  • the first image is an image with the largest time stamp in the image sequence.
  • the image sequence further includes a second image different from the first image, the second image is acquired by the first camera to be calibrated, and the third processing unit is further configured for:
  • the third processing unit is further configured to:
  • the second orientation and the fourth orientation are weighted and averaged to obtain the fifth orientation.
  • the third processing unit is further configured to: determine a first quantity of first target orientations, where the first target orientation is an orientation that is the same as the second orientation; and obtain the first weight based on the first quantity, the first weight being positively correlated with the first quantity.
  • the third processing unit is further configured to: determine a second quantity of second target orientations, where the second target orientation is an orientation that is the same as the fourth orientation; and obtain the second weight based on the second quantity, the second weight being positively correlated with the second quantity.
  • the acquisition unit is further configured to: before the weighted average of the second orientation and the fourth orientation is performed based on the first weight and the second weight to obtain the fifth orientation, obtain a mapping relationship between orientations and direction angles;
  • the third processing unit is further configured to:
  • a weighted average of the first angle and the second angle is performed to obtain a third angle, which is used as the fifth orientation.
  • the third processing unit is further configured to:
  • map the first angle to a first point on a reference circle, where the first angle is equal to the angle between a first vector and a coordinate axis of a rectangular coordinate system, the first vector is the vector pointing from the center of the reference circle to the first point, and the reference circle is located in the rectangular coordinate system;
  • map the second angle to a second point on the reference circle, where the second angle is equal to the angle between a second vector and the coordinate axis, and the second vector is the vector pointing from the center of the circle to the second point;
  • perform a weighted average on the coordinates of the first point and the coordinates of the second point to obtain a third point;
  • determine the angle between a third vector and the coordinate axis to obtain the third angle, where the third vector is the vector pointing from the center of the circle to the third point.
  • the first camera to be calibrated includes a camera.
  • An embodiment of the present application provides an electronic device, comprising: a processor and a memory, where the memory is used to store computer program code, the computer program code includes computer instructions, and when the processor executes the computer instructions, the electronic device executes the method according to the above first aspect and any possible implementation thereof.
  • the embodiments of the present application provide another electronic device, including: a processor, a sending device, an input device, an output device, and a memory, where the memory is used to store computer program code, the computer program code includes computer instructions, and when the processor executes the computer instructions, the electronic device executes the method according to the first aspect and any possible implementation thereof.
  • An embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, the computer program includes program instructions, and when the program instructions are executed by a processor, the processor executes the method according to the first aspect and any possible implementation thereof.
  • An embodiment of the present application provides a computer program product, where the computer program product includes a computer program or instructions, and when the computer program or instructions are run on a computer, the computer is caused to execute the method according to the first aspect and any possible implementation thereof.
  • FIG. 1A is a schematic diagram of a target axis provided by an embodiment of the present application.
  • FIG. 1B is a schematic diagram of a system architecture of a camera orientation calibration method provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of another target axis provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a pixel coordinate system provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a method for calibrating a camera orientation according to an embodiment of the present application
  • FIG. 5 is a schematic diagram of a direction coordinate system provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a camera orientation calibration device according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a hardware structure of an apparatus for calibrating a camera orientation according to an embodiment of the present application.
  • At least one (item) refers to one or more
  • multiple refers to two or more
  • at least two (items) refers to two or more;
  • "and/or" is used to describe the association relationship between related objects and indicates that three relationships can exist. For example, "A and/or B" can mean three cases: only A exists, only B exists, and both A and B exist, where A and B can be singular or plural;
  • the character "/" generally indicates an "or" relationship between the related objects;
  • "at least one (item) of" or a similar expression refers to any combination of the listed items, including a single item or any combination of multiple items. For example, at least one (item) of a, b, or c can mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c can be single or multiple.
  • the traditional method determines the orientation of the camera through manual calibration.
  • this method requires huge labor cost and time cost.
  • the embodiments of the present application provide a technical solution for camera orientation calibration, so as to complete the camera orientation calibration without manual intervention.
  • Target axis: the symmetry axis of the target object as viewed from the front of the target object.
  • the target axis is the vertical axis of the human body.
  • the target axis is the symmetry axis of the vehicle viewed from the front of the vehicle.
  • the orientation of the target object in the image is the angle between the shooting direction of the imaging device that captures the image and the target axis (hereinafter referred to as the reference angle); looking down at the target object from above, the clockwise direction is the positive direction of the included angle.
  • the position in the image refers to the position under the pixel coordinates of the image.
  • the abscissa of the pixel coordinate system is used to represent the number of columns where the pixel points are located
  • the ordinate in the pixel coordinate system is used to represent the number of rows where the pixel points are located.
  • the pixel coordinate system XOY is constructed with the upper left corner of the image as the coordinate origin O, the direction parallel to the rows of the image as the direction of the X-axis, and the direction parallel to the columns of the image as the direction of the Y-axis.
  • the units of the abscissa and ordinate are pixels.
  • the coordinates of pixel point A 11 in FIG. 3 are (1, 1), the coordinates of pixel point A 23 are (3, 2), the coordinates of pixel point A 42 are (2, 4), and the coordinates of pixel point A 34 are (4, 3).
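The convention above (abscissa = column index, ordinate = row index, origin at the top-left corner) can be captured by a hypothetical helper, using the 1-based indices of FIG. 3:

```python
def pixel_coords(row: int, col: int):
    """Convert a (row, column) index pair into the pixel coordinate
    system XOY described above: x is the column number, y is the row
    number, with the origin at the image's top-left corner."""
    return (col, row)
```

For instance, pixel point A 23 (row 2, column 3) maps to (3, 2), matching the figure.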
  • the execution subject of the embodiments of the present application is a camera orientation calibration device, where the camera orientation calibration device may be any electronic device that can execute the technical solutions disclosed in the method embodiments of the present application.
  • the camera orientation calibration device may be one of the following: a mobile phone, a computer, a tablet computer, and a wearable smart device.
  • FIG. 4 is a schematic flowchart of a camera orientation calibration method provided by an embodiment of the present application.
  • the target object may be any object.
  • the target object includes one of the following: a human body, a human face, and a vehicle.
  • the first camera to be calibrated may be any imaging device.
  • the first camera to be calibrated may be a camera on the terminal.
  • the first camera to be calibrated may be a camera.
  • the image sequence includes at least one image, and the images in the image sequence all include the target object, and the images in the image sequence are all collected by the first camera to be calibrated.
  • At least one image in the image sequence is arranged in the order of acquisition time.
  • the image sequence includes image a, image b, and image c, wherein the acquisition time of image a is earlier than the acquisition time of image c, and the acquisition time of image c is earlier than the acquisition time of image b.
  • the arrangement order of image a, image b and image c in the image sequence is: image a, image c, image b.
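The ordering rule above can be sketched directly; representing each image by a (name, acquisition_time) pair is our simplification:

```python
def order_image_sequence(images):
    """Arrange images in order of acquisition time, as in the example
    above (image a, then image c, then image b).

    images: list of (name, acquisition_time) pairs.
    Returns the image names sorted by acquisition time.
    """
    return [name for name, _ in sorted(images, key=lambda i: i[1])]
```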
  • all images in the image sequence may be collected by the first camera to be calibrated, or some images in the image sequence may be collected by the first camera to be calibrated.
  • the image sequence includes image a, image b, and image c.
  • for example, image a, image b, and image c may all be acquired by the first camera to be calibrated; alternatively, some images in the image sequence are acquired by the first camera to be calibrated: for example, image a and image b may be acquired by the first camera to be calibrated, while image c may be acquired by a camera A.
  • the position of the first camera to be calibrated when capturing images in the image sequence is fixed.
  • for example, the image sequence includes an image a and an image b, and the image a and the image b are acquired by the first camera to be calibrated; then, the position of the first camera to be calibrated when the image a is collected is the same as the position of the first camera to be calibrated when the image b is collected.
  • the first image is any image in the image sequence.
  • the first image may be image a, and the first image may also be image b.
  • in one implementation, the camera orientation calibration device acquires the image sequence of the target object by receiving, through an input component, the image sequence of the target object input by a user.
  • the above input components include: keyboard, mouse, touch screen, touch pad, audio input and so on.
  • in another implementation, the camera orientation calibration device acquires the image sequence of the target object by receiving the image sequence of the target object sent by a terminal.
  • the above terminal may be any one of the following: a mobile phone, a computer, a tablet computer, and a server.
  • the first camera to be calibrated belongs to a camera orientation calibration device.
  • the camera orientation calibration device acquires an image sequence of the target object by collecting a video stream containing the target object through the first camera to be calibrated.
  • in another implementation, the camera orientation calibration device receives the to-be-processed video stream collected by the first camera to be calibrated.
  • the camera orientation calibration device performs target object detection processing on the images in the to-be-processed video stream, determines that the to-be-processed video stream contains the images of the target object, and obtains the image sequence of the target object.
  • the first camera to be calibrated belongs to a camera orientation calibration device.
  • the camera orientation calibration device uses the first camera to be calibrated to collect the to-be-processed video stream.
  • the camera orientation calibration device performs target object detection processing on the images in the to-be-processed video stream, determines that the to-be-processed video stream contains the images of the target object, and obtains the image sequence of the target object.
  • the moving direction of the target object during the acquisition process of the image sequence is the moving direction of the target object in the real world.
  • the camera orientation calibration device acquires a reference image sequence before performing step 302, wherein the images in the reference image sequence are acquired by the reference camera, and the position of the reference camera when the reference image sequence is acquired is stable.
  • the image sequence formed by the reference image sequence and the image sequence collected by the first camera to be calibrated is called the target image sequence.
  • based on the position of the reference camera, the position of the first camera to be calibrated, the time at which the reference camera collects the reference image sequence, and the time at which the first camera to be calibrated collects the image sequence, the camera orientation calibration device obtains the trajectory of the target object during the collection of the target image sequence. Then, the moving direction of the target object during the collection of the target image sequence is obtained as the moving direction of the target object during the collection of the image sequence by the first camera to be calibrated.
  • the camera orientation calibration device obtains the orientation of the target object by processing the first image, which is used as the first orientation.
  • the camera orientation calibration device uses an orientation model to process the first image to obtain the first orientation.
  • the orientation model is obtained by training the neural network with the labeled image set as training data, wherein the images in the labeled image set (hereinafter referred to as training images) all contain the target object, and the label of the training image includes the orientation of the target object.
  • the neural network processes the training image to obtain the orientation of the target object in the training image.
  • a first training loss is obtained based on this orientation and the label of the training image. Updating the parameters of the neural network based on the first training loss results in an orientation model.
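The loss-and-update loop described above can be sketched with a toy stand-in model. A real orientation model would be a neural network over images; here the feature representation, the logistic model, and all names are our illustrative assumptions, kept only to show the shape of the training step (predict, compute a first training loss against the label, update parameters):

```python
import numpy as np

def train_orientation_model(features, labels, lr=0.1, epochs=200):
    """Toy stand-in for the orientation-model training described above.

    features: (N, D) array of per-image feature vectors.
    labels: (N,) array, 0 = front faces camera, 1 = back faces camera.
    Returns the learned weights and bias.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=features.shape[1])
    b = 0.0
    for _ in range(epochs):
        logits = features @ w + b
        probs = 1.0 / (1.0 + np.exp(-logits))      # predicted orientation
        grad = probs - labels                      # gradient of the loss
        w -= lr * features.T @ grad / len(labels)  # parameter update
        b -= lr * grad.mean()
    return w, b
```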
  • the first camera to be calibrated is a camera that collects the first image.
  • the first image is acquired by the first camera to be calibrated.
  • the first orientation is the angle between the moving direction of the target object and the shooting direction of the first camera to be calibrated.
  • the second orientation is the shooting direction of the first camera to be calibrated.
  • the camera orientation calibration device obtains an included angle between the shooting direction of the first camera to be calibrated and the moving direction of the target object (hereinafter referred to as the target included angle) based on the first orientation. Based on the target angle and the moving direction, the second orientation is obtained.
  • for example, if the first orientation is 60 degrees and the moving direction of the target object is 30 degrees east of south, the second orientation is due north; that is, the shooting direction of the first camera to be calibrated is due north.
  • the camera orientation calibration device obtains the moving direction of the target object based on the trajectory of the target object in the image sequence. Then, based on the first orientation of the target object in the first image and the moving direction, the orientation of the first camera to be calibrated is obtained. Therefore, the calibration of the orientation of the first camera to be calibrated is completed without human intervention, thereby reducing labor costs and time costs.
  • FIG. 1B is a schematic diagram of a system architecture to which a camera orientation calibration method according to an embodiment of the present application can be applied; as shown in FIG. 1B , the system architecture includes an image acquisition device 2001 , a network 2002 and an image acquisition terminal 2003 .
  • the image capture device 2001 and the image capture terminal 2003 can establish a communication connection through the network 2002; the image capture device 2001 transmits the captured image to the image capture terminal 2003 through the network 2002, and the image capture terminal 2003 receives and analyzes the image, and then determines the second orientation of the camera to be calibrated based on the first orientation and the moving direction of the target object in the image.
  • in the current scene, the image capture device 2001 may be an image capture device such as a camera.
  • the image acquisition terminal 2003 may include a computer device with a certain computing capability, for example, the computer device includes a terminal device or a server or other processing devices.
  • the network 2002 can be wired or wireless.
  • when the image acquisition device 2001 is an image acquisition device and the image acquisition terminal 2003 is a server, the image acquisition device can be connected to the image acquisition terminal through a wired connection, for example, for data communication through a bus; when the image acquisition terminal 2003 is a terminal device, the image acquisition device can be connected to the image acquisition terminal through a wireless connection, and then perform data communication.
  • the image acquisition terminal 2003 may be a vision processing device with a video acquisition module, or a host with a camera.
  • the information processing method of the embodiment of the present application may be executed by the image acquisition terminal 2003 , and the above-mentioned system architecture may not include the network 2002 and the image acquisition device 2001 .
  • the first orientation includes: the front of the target object faces the first camera to be calibrated or the back of the target object faces the first camera to be calibrated.
  • in a case where the reference angle corresponding to the first orientation is not within (90°, 270°), the orientation of the target object is that the front faces the first camera to be calibrated;
  • in a case where the reference angle corresponding to the first orientation is within (90°, 270°), the orientation of the target object is that the back faces the first camera to be calibrated.
  • the camera orientation calibration device performs the following steps in the process of performing step 404:
  • the first orientation is that the front of the target object faces the first camera to be calibrated, indicating that the target object moves toward the camera. Therefore, the camera orientation calibration device determines that the orientation of the first camera to be calibrated is opposite to the moving direction, that is, the second orientation is opposite to the moving direction.
  • the first orientation is that the back of the target object faces the first camera to be calibrated, indicating that the target object moves away from the camera. Therefore, the camera orientation calibration device determines that the orientation of the first camera to be calibrated is the same as the moving direction, that is, the second orientation is the same as the moving direction.
  • the first orientation includes two cases.
  • the camera orientation calibration device determines the first orientation based on the reference angle of the first orientation, so that the orientation of the first camera to be calibrated can be determined based on the first orientation and the moving direction, reducing the amount of data processing.
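  • The front/back rule above can be sketched as follows. This is an illustrative sketch only, not part of the embodiment: the function name, the "front"/"back" labels, and the use of clockwise compass angles in degrees (0° = due north) are all assumptions.

```python
# Sketch of the rule above (illustrative names; the moving direction is a
# clockwise compass angle in degrees, 0 degrees = due north).
def camera_orientation(first_orientation, moving_direction_deg):
    if first_orientation == "front":
        # Front faces the camera: the target moves toward the camera, so the
        # second orientation is opposite to the moving direction.
        return (moving_direction_deg + 180.0) % 360.0
    if first_orientation == "back":
        # Back faces the camera: the target moves away from the camera, so
        # the second orientation is the same as the moving direction.
        return moving_direction_deg % 360.0
    raise ValueError("first orientation must be 'front' or 'back'")

print(camera_orientation("front", 60.0))  # 240.0
```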
  • the images in the above-mentioned sequence of images all include time stamps.
  • the camera orientation calibration device performs the following steps in the process of executing step 402:
  • map data includes a target road
  • the target road is the road on which the target object moves. That is, in the process of capturing the image sequence by the first camera to be calibrated, the target object moves on the target road.
  • the camera orientation calibration device can obtain the information of the target road based on the map data.
  • the information of the target road includes one or more than one of the following: the width of the target road, the length of the target road, the position of the target road, and the direction of the target road.
  • the direction of the target road includes at least two directions. For example, assuming that the direction of the target road is north-south, the direction includes two directions: from south to north and from north to south. For another example, assuming that the direction of the target road runs north by west 30 degrees and south by east 30 degrees, the direction includes two directions: north by west 30 degrees and south by east 30 degrees.
  • the camera orientation calibration device acquires map data by receiving map data input by a user through an input component.
  • the camera orientation calibration device acquires map data by receiving map data sent by a terminal.
  • the camera orientation calibration device obtains the trajectory of the target object in the image sequence based on the position of the target object in the image sequence and the acquisition time of the image sequence.
  • the sequence of images includes an image a and an image b, wherein the acquisition time of image a is t1, the acquisition time of image b is t2, and t1 is earlier than t2.
  • in image a, the position of the target object is (3, 4);
  • in image b, the position of the target object is (5, 4). Then the trajectory of the target object in the image sequence is: the target object is located at (3, 4) at time t1, and at (5, 4) at time t2.
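  • The example above can be sketched as follows (illustrative only; the function name and the (acquisition_time, position) tuple representation are assumptions of this sketch):

```python
# Sketch: the trajectory is the per-image positions of the target object
# ordered by acquisition time.
def build_trajectory(detections):
    """detections: list of (acquisition_time, (x, y)) tuples."""
    return sorted(detections, key=lambda d: d[0])

print(build_trajectory([("t2", (5, 4)), ("t1", (3, 4))]))
# [('t1', (3, 4)), ('t2', (5, 4))]
```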
  • the camera orientation calibration device acquires a reference image sequence before performing step 4, wherein the images in the reference image sequence are acquired by the reference camera, and the position of the reference camera when the reference image sequence is acquired is fixed.
  • the image sequence formed by the reference image sequence and the image sequence collected by the first camera to be calibrated is called the target image sequence.
  • based on the position of the reference camera, the position of the first camera to be calibrated, the time at which the reference camera collects the reference image sequence, and the time at which the first camera to be calibrated collects the image sequence, the camera orientation calibration device obtains the trajectory of the target object in the process of collecting the target image sequence as the trajectory of the target object in the process of capturing the image sequence by the first camera to be calibrated.
  • the camera orientation calibration device can obtain the moving direction of the target object in the image sequence based on the trajectory (hereinafter referred to as the reference moving direction), determine the included angle between the reference moving direction and the direction of the target road, and obtain the moving direction of the target object based on the included angle and the direction of the target road.
  • the camera orientation calibration device may determine, based on the trajectory, that the moving direction of the target object in the image sequence is the positive direction of the horizontal axis of the pixel coordinate system.
  • the direction of the target road is a north-south direction, that is, the direction of the target road includes a direction from due south to due north (hereinafter referred to as direction 1) and a direction from due north to due south (hereinafter referred to as direction 2). If it is determined that, in the image sequence, the reference moving direction must be rotated 60 degrees counterclockwise to coincide with direction 1, then, in the real world, the moving direction of the target object rotated 60 degrees counterclockwise coincides with the direction of the target road. Assuming that the direction of the target road is the north-south direction, the camera orientation calibration device can determine that the moving direction of the target object is north by east 60 degrees.
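  • The arithmetic of this example can be sketched as follows, under the assumption that directions are expressed as clockwise compass angles in degrees (0° = due north); the function name is illustrative. Undoing the 60° counterclockwise offset rotates direction 1 (0°) clockwise by 60°, giving the real-world moving direction.

```python
# Sketch: recover the real-world moving direction from the road direction
# and the counterclockwise offset observed in the image.
def real_world_moving_direction(road_direction_deg, ccw_offset_deg):
    return (road_direction_deg + ccw_offset_deg) % 360.0

print(real_world_moving_direction(0.0, 60.0))  # 60.0 -> north by east 60 degrees
```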
  • the images in the image sequence are acquired by the first camera to be calibrated.
  • the camera orientation calibration device performs the following steps in the process of performing step 4:
  • the position of the first camera to be calibrated when the image sequence is collected is fixed.
  • the pixel coordinate system of the image sequence is the pixel coordinate system of any image in the image sequence.
  • the trajectory of the target object in the image sequence can be obtained, that is, the trajectory of the target object in the pixel coordinate system of the image sequence, that is, the second trajectory.
  • the camera orientation calibration device takes the second trajectory as the first trajectory, that is, the first trajectory is the trajectory of the target object in the pixel coordinate system of the image sequence.
  • the sequence of images includes an image a and an image b, wherein the acquisition time of image a is t1, the acquisition time of image b is t2, and t1 is earlier than t2.
  • in image a, the position of the target object is (3, 4);
  • in image b, the position of the target object is (5, 4).
  • the trajectory (i.e., the second trajectory) of the target object in the image sequence is: the target object is located at (3, 4) at time t1, and at (5, 4) at time t2.
  • the camera orientation calibration device performs the following steps in the process of performing step 5:
  • the camera orientation calibration device performs road detection processing on the image sequence to determine the position of the target road in the pixel coordinate system of the image sequence. Based on the positions of the first trajectory and the target road, the included angle between the first trajectory and the target road (that is, the included angle between the second trajectory and the target road), namely the first included angle, is obtained.
  • the camera orientation calibration device can determine the direction of the target road based on the map data. Based on the first included angle, a direction matching the first trajectory may be determined from the direction of the target road as the moving direction of the target object.
  • the direction of the target road is a north-south direction.
  • the direction of the target road includes two directions, from south to north and from north to south.
  • the first direction is from south to north and the second direction is from north to south.
  • the camera orientation calibration device performs step 7 to determine that, in the pixel coordinate system of the image sequence, the included angle between the first trajectory and the first direction is the first included angle.
  • if the first included angle is within [0°, 90°), the camera orientation calibration device determines that the first trajectory matches the first direction, thereby determining that the moving direction of the target object is from south to north; if the first included angle is within (90°, 180°], the camera orientation calibration device determines that the first trajectory matches the second direction, thereby determining that the moving direction of the target object is from north to south.
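  • The angle test above can be sketched as follows (illustrative names; a sketch of the threshold logic, not the embodiment's exact implementation):

```python
# Sketch: the first included angle selects which of the road's two
# directions the first trajectory matches.
def matched_direction(first_included_angle_deg, first_dir, second_dir):
    if 0.0 <= first_included_angle_deg < 90.0:
        return first_dir   # trajectory matches the first direction
    return second_dir      # angle in (90 degrees, 180 degrees]: second direction

print(matched_direction(30.0, "from south to north", "from north to south"))
# from south to north
```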
  • area A includes 100 cameras, and these 100 cameras all have a communication connection with the server of the camera management center; the server can obtain the video streams collected by the 100 cameras through the communication connection.
  • the staff in area A wants to calibrate the orientation of camera B among the 100 cameras. Then, the staff can obtain the image sequence C collected by the camera B through the server, where the image sequence C includes the target object (for example, the image sequence C includes Zhang San).
  • the server further processes the image sequence C based on the technical solution provided by this embodiment, that is, the image sequence C to be processed is used as the image sequence in this embodiment, and camera B is used as the first camera to be calibrated, to obtain the orientation of camera B.
  • the image sequence includes a first image subsequence and a second image subsequence
  • the images in the first image subsequence are acquired by the first camera to be calibrated
  • the images in the second image subsequence are obtained by The second camera to be calibrated is acquired.
  • the first camera to be calibrated is different from the second camera to be calibrated, and the position of the second camera to be calibrated when collecting the second image subsequence is different from the position of the first camera to be calibrated when collecting the first image subsequence.
  • the position of the first camera to be calibrated is the position of the first camera to be calibrated in the real world.
  • the position of the second camera to be calibrated is the position of the second camera to be calibrated in the real world.
  • the image sequence includes image a, image b, image c, and image d, wherein image a and image b belong to the first image subsequence, and image c and image d belong to the second image subsequence.
  • the image a and the image b are acquired by the first camera to be calibrated at position 1, and the image c and the image d are acquired by the second camera to be calibrated at position 2, where position 1 and position 2 are different.
  • the direction of the target road includes two directions.
  • the direction of the target road includes a first direction and a second direction.
  • the camera orientation calibration device performs the following steps in the process of performing step 4:
  • the trajectory of the above-mentioned target object in the real world is obtained and taken as the above-mentioned first trajectory.
  • the camera orientation calibration device can determine the sequence of collecting the first image subsequence and the second image subsequence based on the acquisition time of the images in the first image subsequence and the acquisition time of the images in the second image subsequence, and further can determine the sequence in which the target object passes through the position of the first camera to be calibrated and the position of the second camera to be calibrated.
  • the camera orientation calibration device can obtain the trajectory of the target object in the real world, that is, the third trajectory, based on the position of the first camera to be calibrated and the position of the second camera to be calibrated.
  • the camera orientation calibration device takes the third trajectory as the first trajectory, that is, the first trajectory is the trajectory of the target object in the real world.
  • the position of the first camera to be calibrated is position 1
  • the position of the second camera to be calibrated is position 2
  • the acquisition time of the latest image in the first image subsequence is t1
  • the acquisition time of the latest image in the second image subsequence is t2, wherein the latest image is the image with the latest acquisition time in its image subsequence. If t1 is earlier than t2, the camera orientation calibration device obtains a third trajectory, wherein the third trajectory indicates that the target object appears at position 1 at t1 and, after moving, appears at position 2 at t2.
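  • Obtaining the third trajectory can be sketched as follows (the function name and the (latest_acquisition_time, position) representation are assumptions of this sketch):

```python
# Sketch: order the cameras' real-world positions by the acquisition time
# of the latest image each camera contributed.
def third_trajectory(camera_records):
    """camera_records: list of (latest_acquisition_time, position) tuples."""
    return [position for _, position in sorted(camera_records)]

# t1 (=1) is earlier than t2 (=2): the target appears at position 1, then 2.
print(third_trajectory([(2, "position 2"), (1, "position 1")]))
# ['position 1', 'position 2']
```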
  • the first camera to be calibrated and the second camera to be calibrated in this embodiment are only examples, and it should not be understood that the images in the image sequence are only acquired by two cameras.
  • in addition to the images captured by the first camera to be calibrated, the image sequence may also include images captured by one or more other cameras.
  • the image sequence includes image a, image b, image c, image d, image e and image f, wherein image a and image b are acquired by the first camera to be calibrated, and image c and image d are acquired by the second camera to be calibrated acquisition, the image e and the image f are acquired by the third camera to be calibrated.
  • the camera orientation calibration device can obtain the third trajectory based on the position of each camera in the target camera group and the time when each camera collects images. At this time, the position of each camera in the target camera group is fixed when capturing images in the image sequence, and the positions of any two cameras are different.
  • the image sequence includes image a, image b, image c, image d, image e, and image f, wherein image a and image b belong to the first image subsequence, image c and image d belong to the second image subsequence, and image e and image f belong to the third image subsequence.
  • the first image subsequence is acquired by the first camera to be calibrated
  • the second image subsequence is acquired by the second camera to be calibrated
  • the third image subsequence is acquired by the third camera to be calibrated.
  • the position of the first camera to be calibrated when collecting the first image subsequence is position 1
  • the position of the second camera to be calibrated when collecting the second image subsequence is position 2
  • the position of the third camera to be calibrated when collecting the third image subsequence is position 3, and position 1 and position 2 are different, position 2 and position 3 are different, and position 1 and position 3 are different.
  • the acquisition time of the latest image in the first image subsequence is t1
  • the acquisition time of the latest image in the second image subsequence is t2
  • the acquisition time of the latest image in the third image subsequence is t3, wherein,
  • the latest image is the image with the latest acquisition time in its image subsequence. If t1 is earlier than t2 and t2 is earlier than t3, the camera orientation calibration device obtains a third trajectory, wherein the third trajectory indicates that the target object appears at position 1 at t1, then, after moving, at position 2 at t2, and then, after moving, at position 3 at t3.
  • the camera orientation calibration device can obtain the third trajectory based on the position of each camera in the target camera group and the time when each camera collects images.
  • the camera orientation calibration device performs the following steps in the process of performing step 5:
  • the matching of the first trajectory with the first direction refers to the matching of the direction of the first trajectory with the first direction;
  • the matching of the first trajectory with the second direction refers to the matching of the direction of the first trajectory with the second direction.
  • the direction of the target road includes: north-south direction and east-west direction, wherein the north-south direction includes: the direction from south to north and the direction from north to south, and the east-west direction includes: the direction from east to west direction and direction from west to east.
  • the first direction may be the direction from south to north and the second direction the direction from north to south; alternatively, the first direction may be the direction from north to south and the second direction the direction from south to north.
  • for example, if the direction of the target road is north-south and the direction of the first trajectory is south (the south direction here includes due south, south by east, and south by west), it is determined that the first trajectory matches the direction from north to south, and the camera orientation calibration device determines that the moving direction of the target object is due south;
  • if the direction of the target road is north-south and the direction of the first trajectory is north (the north direction here includes due north, north by east, and north by west), it is determined that the first trajectory matches the direction from south to north, and the camera orientation calibration device determines that the moving direction of the target object is due north;
  • if the direction of the target road is east-west and the direction of the first trajectory is east (the east direction here includes due east, east by north, and east by south), it is determined that the first trajectory matches the direction from west to east, and the camera orientation calibration device determines that the moving direction of the target object is due east;
  • if the direction of the target road is east-west and the direction of the first trajectory is west (the west direction here includes due west, west by north, and west by south), it is determined that the first trajectory matches the direction from east to west, and the camera orientation calibration device determines that the moving direction of the target object is due west.
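  • The matching cases above can be sketched as follows, under the assumption that directions are expressed as clockwise compass angles in degrees (0° = due north); the function name and representation are illustrative:

```python
# Sketch: snap the first-trajectory direction to whichever of the road's
# two directions it is angularly closer to.
def snap_to_road(trajectory_deg, road_directions_deg):
    def angular_diff(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    return min(road_directions_deg,
               key=lambda r: angular_diff(trajectory_deg, r))

# A trajectory of 170 degrees (south by east) on a north-south road (0 and
# 180 degrees) matches the from-north-to-south direction, i.e. due south.
print(snap_to_road(170.0, [0.0, 180.0]))  # 180.0
```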
  • the camera orientation calibration device determines the moving direction of the target object through step 10 and step 11, which can reduce the amount of data processing and improve the processing speed.
  • area A includes 100 cameras, and these 100 cameras all have a communication connection with the server of the camera management center; the server can obtain the video streams collected by the 100 cameras through the communication connection.
  • the staff in area A wants to calibrate the orientation of camera B among the 100 cameras. Then, the staff can obtain, through the server, the image sequence C collected by camera B and the image sequence E collected by camera D among the 100 cameras, wherein the image sequence C and the image sequence E both contain the target object (for example, image sequence C and image sequence E both contain Zhang San).
  • the server takes the image sequence C as the first image subsequence and the image sequence E as the second image subsequence to obtain the image sequence F.
  • the server further processes the image sequence F based on the technical solution provided by this embodiment, that is, the image sequence F to be processed is used as the image sequence in this embodiment, and camera B is used as the first camera to be calibrated, to obtain the orientation of camera B.
  • the above-mentioned first image is an image with the largest time stamp in the above-mentioned image sequence.
  • in this way, the camera orientation calibration device can improve the matching degree between the first orientation and the target object, thereby improving the accuracy of the orientation of the first camera to be calibrated.
  • the above-mentioned image sequence further includes a second image different from the above-mentioned first image, and the second image is acquired by the first camera to be calibrated.
  • the camera orientation calibration device also performs the following steps:
  • the implementation of this step may refer to step 1, wherein the first image corresponds to the second image, and the first orientation corresponds to the third orientation.
  • the implementation of this step may refer to step 404, wherein the first orientation corresponds to the third orientation, and the second orientation corresponds to the fourth orientation. That is, the second orientation is the orientation of the first camera to be calibrated determined by the camera orientation calibration device based on the first image, and the fourth orientation is the orientation of the first camera to be calibrated determined by the camera orientation calibration device based on the second image.
  • the camera orientation calibration device obtains the fifth orientation of the first camera to be calibrated based on the second orientation and the fourth orientation, which can improve the accuracy of the orientation of the first camera to be calibrated.
  • the camera orientation calibration device averages the second orientation and the fourth orientation to obtain the fifth orientation.
  • the camera orientation calibration device first determines a fourth orientation based on the second image, and then obtains a fifth orientation based on the second orientation and the fourth orientation, which can improve the orientation accuracy of the first camera to be calibrated.
  • the camera orientation calibration device performs the following steps in the process of performing step 14:
  • the greater the timestamp of the first image, the greater the value of the first weight;
  • the greater the timestamp of the second image, the greater the value of the second weight;
  • the camera orientation calibration device determines the weight of the orientation corresponding to each image based on the timestamp of the image, and the orientation of the first camera to be calibrated can be obtained based on the orientations and the weights of the orientations, which can improve the accuracy of the orientation of the first camera to be calibrated.
  • the camera orientation calibration device may determine an orientation based on each image in the image sequence, and determine a weight for each orientation.
  • the orientation of the first camera to be calibrated is obtained by performing a weighted average of all orientations.
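  • The timestamp-weighted averaging can be sketched as follows. The embodiment only states that a larger timestamp yields a larger weight; the concrete weight function below (weight = timestamp) and all names are assumptions of this sketch.

```python
# Sketch: weighted average of per-image orientation angles, with weights
# growing with the image timestamps.
def weighted_orientation_angle(orientation_angles, timestamps):
    weights = list(timestamps)  # larger timestamp -> larger weight (assumed)
    total = sum(weights)
    return sum(w * a for w, a in zip(weights, orientation_angles)) / total

# Two orientations (as direction angles) with weights 10 and 15:
print(weighted_orientation_angle([90.0, 180.0], [10, 15]))  # 144.0
```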
  • the camera orientation calibration device performs the following steps in the process of performing step 15:
  • the sequence of images includes a first image, a second image, a third image, and a fourth image.
  • the image set may include the third image, the image set may also include the fourth image, and the image set may further include the third image and the fourth image.
  • in the case that the image set includes the third image, the camera orientation calibration device obtains the sixth orientation of the first camera to be calibrated based on the third image, and the orientation set includes the sixth orientation at this time; in the case that the image set includes the fourth image, the camera orientation calibration device obtains the seventh orientation of the first camera to be calibrated based on the fourth image, and the orientation set includes the seventh orientation; in the case that the image set includes the third image and the fourth image, the camera orientation calibration device obtains the sixth orientation of the first camera to be calibrated based on the third image and obtains the seventh orientation of the first camera to be calibrated based on the fourth image, and at this time the orientation set includes the sixth orientation and the seventh orientation.
  • the angle of the first target orientation is the same as the angle of the second orientation. For example, if the second orientation is south by east 60 degrees, then the first target orientation is south by east 60 degrees.
  • the orientation of the first target orientation is the same as the orientation of the second orientation. For example, if the second orientation is due east, then the first target orientation is due east; if the second orientation is due north, then the first target orientation is due north.
  • the angle of the second target orientation is the same as the angle of the fourth orientation. For example, if the fourth orientation is south by east 60 degrees, then the second target orientation is south by east 60 degrees.
  • the orientation of the second target orientation is the same as the orientation of the fourth orientation. For example, if the fourth orientation is due east, then the second target orientation is due east; if the fourth orientation is due north, then the second target orientation is due north.
  • the ratio of the first quantity to the first weight is called the first ratio
  • the ratio of the second quantity to the second weight is called the second ratio
  • the first ratio and the second ratio are the same.
  • N 1 , N 2 , W 1 , W 2 satisfy the following formula:
  • k is a positive number and c is a non-negative number.
  • before performing step 16, the camera orientation calibration device further performs the following steps:
  • the direction angle refers to the included angle with the true north direction in the direction coordinate system, wherein the direction coordinate system can be referred to in FIG. 5 .
  • the orientation includes due south, due north, due east, and due west.
  • there is a mapping relationship between due north and 0 degrees, between due west and 90 degrees, between due south and 180 degrees, and between due east and 270 degrees.
  • after obtaining the first angle and the second angle, the camera orientation calibration device performs the following steps in the process of performing step 16:
  • a weighted average of the first angle and the second angle is performed to obtain a third angle, which is used as the fifth orientation.
  • the first weight is 10
  • the first angle is 90 degrees
  • the second weight is 15, and the second angle is 180 degrees.
  • the camera orientation calibration apparatus maps the second orientation and the fourth orientation to the first angle and the second angle, respectively, based on the mapping relationship.
  • the third angle is obtained by the weighted average of the first angle and the second angle, and the third angle is used as the fifth orientation, so as to improve the accuracy of the orientation of the camera to be calibrated.
  • the orientation of the camera to be calibrated determined based on a single image in the image sequence is one of due south, due north, due east, and due west, but by performing steps 21 to 24, the orientation of the camera to be calibrated can be determined accurately to other angles.
  • the second orientation is due west
  • the fourth orientation is due south
  • the obtained fifth orientation is 144°.
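  • This example can be sketched as follows, using the mapping relationship of the previous steps (due north 0°, due west 90°, due south 180°, due east 270°) and the weights 10 and 15 from the earlier example; the function and dictionary names are illustrative:

```python
# Sketch: map the second and fourth orientations to direction angles via
# the fixed mapping, then take their weighted average as the fifth
# orientation.
ORIENTATION_TO_ANGLE = {"due north": 0.0, "due west": 90.0,
                        "due south": 180.0, "due east": 270.0}

def fifth_orientation(second_orientation, first_weight,
                      fourth_orientation, second_weight):
    first_angle = ORIENTATION_TO_ANGLE[second_orientation]
    second_angle = ORIENTATION_TO_ANGLE[fourth_orientation]
    return ((first_weight * first_angle + second_weight * second_angle)
            / (first_weight + second_weight))

print(fifth_orientation("due west", 10, "due south", 15))  # 144.0
```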
  • in step 24, the camera orientation calibration device obtains the third angle by performing the following steps:
  • the above-mentioned first vector is the vector whose starting point is the center of the above-mentioned reference circle and which points to the above-mentioned first point, and the above-mentioned reference circle is in the above-mentioned rectangular coordinate system.
  • the coordinate axis may be the horizontal axis of the rectangular coordinate system, and the coordinate axis may also be the vertical axis of the rectangular coordinate system.
  • the coordinate axis is the horizontal axis of the rectangular coordinate system
  • the center of the reference circle is the origin O, and the radius is 1.
  • assuming the first angle is α, the coordinates of the first point are (sin α, cos α).
  • map the above-mentioned second angle to a second point on the reference circle, where the above-mentioned second angle is the same as the angle between the second vector and the above-mentioned coordinate axis, and the above-mentioned second vector is the vector whose starting point is the center of the circle and which points to the second point.
  • the third vector is (0.4, 0.6). If the coordinate axis is the horizontal axis of the rectangular coordinate system, and the third angle is β, then β satisfies: sin β : cos β = 0.4 : 0.6, that is, tan β = 0.4 / 0.6.
  • if the angles are directly weighted and averaged, a large error is easily introduced.
  • the first angle is 0°
  • the second angle is 359°
  • the first weight and the second weight are both 1.
  • the third angle obtained by the direct weighted average of the first angle and the second angle is 179.5°. That is, the orientation of the third angle is close to due south, but the orientations of the first angle and the second angle are both close to due north, which obviously results in a large error.
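  • The vector-based averaging described in the preceding steps avoids this wrap-around error. A minimal sketch (the function name is an assumption; angles are measured from the vertical axis in degrees, matching the (sin α, cos α) mapping above):

```python
import math

# Map each angle to a point (sin a, cos a) on the unit reference circle,
# take the weighted sum of the points, and recover the angle of the
# resulting vector with atan2.
def circular_weighted_mean(angles_deg, weights):
    x = sum(w * math.sin(math.radians(a)) for a, w in zip(angles_deg, weights))
    y = sum(w * math.cos(math.radians(a)) for a, w in zip(angles_deg, weights))
    return math.degrees(math.atan2(x, y)) % 360.0

# 0 degrees and 359 degrees with equal weights: the direct weighted average
# gives 179.5 degrees (near due south), while the vector method gives about
# 359.5 degrees (near due north), consistent with both inputs.
print(round(circular_weighted_mean([0.0, 359.0], [1, 1]), 1))  # 359.5
```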
  • the embodiment of the present application also provides a possible application scenario.
  • cameras are installed in various areas so that security protection can be performed based on the video stream information collected by the cameras, such as determining the whereabouts of a target person from the video stream.
  • the orientation of the camera can be determined.
  • how to efficiently and accurately determine the orientation of a large number of cameras is of great significance.
  • the managers of area A want to calibrate the orientation of the cameras in area A.
  • if the orientation of the cameras is calibrated manually, it will bring a large labor cost, and the calibration efficiency is low.
  • the labor cost for calibrating the orientation of the camera can be reduced, and the calibration efficiency can be improved.
  • the administrator establishes a communication connection between the server (that is, the above-mentioned camera orientation calibration device) and all cameras in area A. Through this communication connection, the server can exchange data with any camera in area A.
  • the server obtains the first to-be-processed video stream collected by the camera in the A region through the communication connection at the first time.
  • the server determines, from the first to-be-processed video stream, the video stream whose acquisition time is within the preset time, and obtains the second to-be-processed video stream.
  • the preset time is 1 minute.
  • the server obtains the first video stream to be processed at 9:13:2 on March 2, 2021, that is, the first time is 9:13:2 on March 2, 2021.
  • the server selects the video stream whose collection time is between 9:12:2 on March 2, 2021 and 9:13:2 on March 2, 2021 from the first to-be-processed video stream as the second to-be-processed video stream .
  • the server performs human body detection processing on the second to-be-processed video stream, and selects images containing human bodies from the second to-be-processed video stream to obtain a first to-be-processed image set.
  • the server performs human body clustering processing on the images in the first to-be-processed image set, so as to determine at least one image containing the same human body from the first to-be-processed image set and obtain a second to-be-processed image set, wherein the human body clustering processing refers to clustering based on the similarity between the human body features of the images.
  • the first image set to be processed includes: image a, image b, image c, and image d.
  • the server determines that the human body included in image a is the same as the human body included in image b, and the human body included in image c is the same as the human body included in image d.
  • the image set composed of image a and image b is a second image set to be processed, and the image set composed of image c and image d is also a second image set to be processed.
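  • The grouping result of this example can be sketched as follows. The actual embodiment clusters by the similarity of human body features; in this sketch, precomputed person labels stand in for that clustering step, and all names are illustrative.

```python
from collections import defaultdict

# Sketch: group the first set of images to be processed by detected person,
# yielding the second sets to be processed.
def group_by_person(images):
    """images: list of (image_name, person_label) pairs."""
    groups = defaultdict(list)
    for name, person in images:
        groups[person].append(name)
    return list(groups.values())

print(group_by_person([("a", 1), ("b", 1), ("c", 2), ("d", 2)]))
# [['a', 'b'], ['c', 'd']]
```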
  • the server obtains the trajectory of the person in the second image set to be processed (that is, the target object is a person, and the person in the second image set to be processed is hereinafter referred to as the first target person).
  • the second image set to be processed includes image a and image b, wherein both image a and image b include Zhang San, image a is captured by the first camera, image b is captured by the second camera, the acquisition time of image a is t1, the acquisition time of image b is t2, and t1 is earlier than t2.
  • the server may further determine that Zhang San is at the position of the first camera at time t1, and Zhang San is at the position of the second camera at time t2.
  • the second image set to be processed includes image a, image b, and image c, wherein image a, image b, and image c all include Zhang San; image a is captured by the first camera, and image b and image c are captured by the second camera; the acquisition time of image a is t1, the acquisition time of image b is t2, the acquisition time of image c is t3, t1 is earlier than t2, and t2 is earlier than t3.
  • the server may further determine that Zhang San is at the position of the first camera at time t1, and Zhang San is at the position of the second camera at time t2 and time t3.
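The per-camera detections described above can be assembled into a trajectory by sorting on acquisition time. A minimal sketch; the camera positions and timestamps are illustrative assumptions, not values from the embodiment:

```python
# Build a person's trajectory from detections made by cameras at known positions.
# Each detection is (acquisition_time, camera_id); camera_positions maps id -> (x, y).

def build_trajectory(detections, camera_positions):
    """Return [(time, position)] sorted by acquisition time."""
    ordered = sorted(detections, key=lambda d: d[0])
    return [(t, camera_positions[cam]) for t, cam in ordered]

# Hypothetical example mirroring the text: Zhang San is seen by the first
# camera at t1 and by the second camera at t2 and t3.
cameras = {"first": (0.0, 0.0), "second": (30.0, 40.0)}
detections = [(2.0, "second"), (1.0, "first"), (3.0, "second")]
trajectory = build_trajectory(detections, cameras)
# trajectory: [(1.0, (0.0, 0.0)), (2.0, (30.0, 40.0)), (3.0, (30.0, 40.0))]
```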
  • the server acquires map data, and determines the direction of the first target road where the first target person is located based on the map data.
  • the server determines the moving direction of the first target person based on the trajectory of the first target person and the direction of the first target road. For example, assume that in Example 1 the second camera is located to the northeast of the first camera. If the direction of the first target road is north-south, the moving direction of Zhang San is due north.
  • assume that in Example 2 the second camera is located to the southwest of the first camera. If the direction of the first target road is north-south, then the moving direction of Zhang San is due south.
  • the image sequence includes image b and image c.
  • the moving direction of Zhang San during the process of collecting the image sequence is also due south.
  • the server determines the image with the largest time stamp in the second image set to be processed, and obtains the first image to be processed.
  • the server determines the orientation of the first target person in the first image to be processed.
  • the server obtains the first orientation of the camera based on the orientation of the first target person and the moving direction of the first target person.
  • the orientation of the first target person is facing the surveillance camera, and the moving direction of the first target person is due north, then the first orientation of the camera is due south.
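The inference in this example can be written as a small helper. A sketch under the assumption that directions are compass bearings in degrees (0° = due north, clockwise positive); the function name is illustrative:

```python
def camera_bearing(person_facing_camera, moving_bearing):
    """Infer the camera's shooting bearing from whether the person's front
    faces the camera and the person's moving bearing (degrees, 0 = due north,
    clockwise). Front facing the camera -> the camera points opposite to the
    movement; back facing the camera -> the camera points the same way."""
    if person_facing_camera:
        return (moving_bearing + 180.0) % 360.0
    return moving_bearing % 360.0

# The example from the text: the person faces the camera while moving due
# north (0 deg), so the first orientation of the camera is due south (180 deg).
assert camera_bearing(True, 0.0) == 180.0
```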
  • the server can obtain the first orientation of the camera based on the first video stream to be processed obtained at the first time.
  • the server can obtain the second orientation of the camera based on the video stream collected by the camera obtained at the second time, ..., the server can obtain the nth orientation of the camera based on the video stream collected by the camera obtained at the nth time.
  • the first time, the second time, . . . , and the nth time are pairwise different.
  • the server determines the angles corresponding to the first orientation, the second orientation, ..., and the nth orientation, respectively, based on the following table.
  • the server determines the number of orientations with an angle of 0°, resulting in N1.
  • the server determines the number of orientations with an angle of 90°, resulting in N2.
  • the server determines the number of orientations with an angle of 180°, resulting in N3.
  • the server determines the number of orientations with an angle of 270°, resulting in N4.
  • the server maps 0° onto the reference circle whose center is the coordinate origin and whose radius is 1. For example, suppose the angle corresponding to an orientation is θ; the coordinates of θ on the reference circle are (sin θ, cos θ). In this way, mapping 0° onto the reference circle gives the point (0, 1), mapping 90° gives (1, 0), mapping 180° gives (0, -1), and mapping 270° gives (-1, 0).
  • the server determines the angle corresponding to the orientation of the camera based on the following formula:
  • the server determines the orientation of the camera based on θ and Table 1.
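The formula referred to above is not reproduced in this extraction. A plausible sketch of the vector-averaging step it likely performs, combining the counts N1–N4 through the (sin θ, cos θ) mapping described above, is given below; treat the exact aggregation as an assumption:

```python
import math

def aggregate_orientation(counts):
    """counts maps an angle in degrees (0/90/180/270) to the number of
    per-image orientation estimates that produced it (N1..N4 in the text).
    Each angle is mapped to the point (sin t, cos t) on the unit reference
    circle; the points are summed with the counts as weights, and the angle
    of the resulting vector is returned in [0, 360) degrees."""
    x = sum(n * math.sin(math.radians(a)) for a, n in counts.items())
    y = sum(n * math.cos(math.radians(a)) for a, n in counts.items())
    return math.degrees(math.atan2(x, y)) % 360.0

# Hypothetical counts: most estimates said 90 deg, a few said 0 deg, so the
# aggregated angle falls between 0 and 90, closer to 90.
theta = aggregate_orientation({0: 2, 90: 6, 180: 0, 270: 0})
```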
  • the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the execution order of the steps should be determined by their functions and possible internal logic.
  • FIG. 6 is a schematic structural diagram of a camera orientation calibration device according to an embodiment of the present application.
  • the device 1 includes: an acquisition unit 11, a first processing unit 12, a second processing unit 13, and a third processing unit 14, wherein:
  • an acquisition unit 11 configured to acquire an image sequence of the target object, where the image sequence includes a first image
  • the first processing unit 12 is configured to obtain the moving direction of the target object in the acquisition process of the image sequence according to the image sequence;
  • the second processing unit 13 is configured to determine the first orientation of the target object in the first image
  • the third processing unit 14 is configured to obtain the second orientation of the first camera to be calibrated according to the first orientation and the moving direction, where the first camera to be calibrated is the camera that captures the first image.
  • the first orientation includes: the front of the target object faces the first camera to be calibrated or the back of the target object faces the first camera to be calibrated, and the third processing unit 14 is configured to:
  • when the first orientation is that the front of the target object faces the first camera to be calibrated, determine that the second orientation is opposite to the moving direction; when the first orientation is that the back of the target object faces the first camera to be calibrated, determine that the second orientation is the same as the moving direction.
  • the third processing unit 14 is configured as:
  • acquire map data, where the map data includes a target road and the target road is the road on which the target object moves;
  • obtain the first trajectory of the target object based on the image sequence;
  • obtain the moving direction of the target object based on the first trajectory and the target road.
  • the images in the image sequence are acquired by the first camera to be calibrated
  • the third processing unit 14 is configured as:
  • obtain, based on the position of the target object in the pixel coordinate system of the image sequence and the acquisition time of the images in the image sequence, the trajectory of the target object in the image sequence as the first trajectory;
  • determine, based on the image sequence, a first included angle between the first trajectory and the target road, and obtain the moving direction of the target object based on the first included angle and the direction of the target road.
  • the image sequence includes a first image subsequence and a second image subsequence, the images in the first image subsequence are acquired by the first camera to be calibrated, and the images in the second image subsequence are acquired by the second camera to be calibrated;
  • the third processing unit 14 is configured as:
  • obtain, based on the acquisition time of the images in the first image subsequence, the acquisition time of the images in the second image subsequence, the position of the first camera to be calibrated, and the position of the second camera to be calibrated, the trajectory of the target object in the real world as the first trajectory.
  • the direction of the target road includes a first direction and a second direction
  • the third processing unit 14 is configured as:
  • when it is determined that the first trajectory matches the first direction, determine that the moving direction is the first direction;
  • when it is determined that the first trajectory matches the second direction, determine that the moving direction is the second direction.
  • the first image is an image with the largest time stamp in the image sequence.
  • the image sequence further includes a second image different from the first image, the second image is acquired by the first camera to be calibrated, and the third processing unit 14 is further configured to:
  • determine a third orientation of the target object in the second image; obtain a fourth orientation of the first camera to be calibrated based on the third orientation and the moving direction; and obtain a fifth orientation of the first camera to be calibrated based on the second orientation and the fourth orientation.
  • the third processing unit 14 is further configured to:
  • acquire a first weight of the second orientation and a second weight of the fourth orientation, and perform a weighted average on the second orientation and the fourth orientation based on the first weight and the second weight to obtain the fifth orientation.
  • the third processing unit 14 is further configured to:
  • determine the orientation of the first camera to be calibrated based on at least one image in an image set to obtain an orientation set, where the image set includes the images in the image sequence other than the first image and the second image; determine the number of first target orientations in the orientation set to obtain a first quantity, where a first target orientation is the same as the second orientation;
  • the first weight is obtained according to the first quantity, and the first weight is positively correlated with the first quantity.
  • the third processing unit 14 is further configured to:
  • determine the number of second target orientations in the orientation set of the first camera to be calibrated to obtain a second quantity, where a second target orientation is the same as the fourth orientation;
  • the second weight is obtained according to the second quantity, and the second weight is positively correlated with the second quantity.
  • the obtaining unit 11 is further configured to: before the weighted average is performed on the second orientation and the fourth orientation according to the first weight and the second weight to obtain the fifth orientation, obtain the mapping relationship between orientations and direction angles;
  • the third processing unit 14 is further configured to:
  • determine, based on the mapping relationship, a first angle having a mapping relationship with the second orientation; determine, based on the mapping relationship, a second angle having a mapping relationship with the fourth orientation; and perform a weighted average on the first angle and the second angle to obtain a third angle, which is used as the fifth orientation.
  • the third processing unit 14 is further configured to:
  • the first angle is mapped to a first point on a reference circle, where the first angle is equal to a second included angle, the second included angle is the angle between a first vector and a coordinate axis of a rectangular coordinate system, the first vector is the vector from the center of the reference circle to the first point, and the reference circle is in the rectangular coordinate system;
  • the second angle is mapped to a second point on the reference circle, where the second angle is equal to a third included angle, the third included angle is the angle between a second vector and the coordinate axis, and the second vector is the vector from the center of the circle to the second point;
  • a weighted average is performed on the coordinates of the first point and the coordinates of the second point to obtain a third point;
  • the angle between a third vector and the coordinate axis is determined to obtain the third angle, where the third vector is the vector from the center of the circle to the third point.
  • the first camera to be calibrated includes a camera.
  • the functions or modules included in the apparatus provided in the embodiments of the present application may be configured to execute the methods described in the above method embodiments; for specific implementation, reference may be made to the descriptions of the above method embodiments, and details are not repeated here.
  • FIG. 7 is a schematic diagram of a hardware structure of an apparatus for calibrating a camera orientation according to an embodiment of the present application.
  • the camera orientation calibration device 2 includes a processor 21 , a memory 22 , an input device 23 , and an output device 24 .
  • the processor 21, the memory 22, the input device 23, and the output device 24 are coupled through a connector, and the connector includes various types of interfaces, transmission lines, or buses, etc., which are not limited in this embodiment of the present application. It should be understood that, in various embodiments of the present application, coupling refers to mutual connection in a specific manner, including direct connection or indirect connection through other devices, such as various interfaces, transmission lines, and buses.
  • the processor 21 may be one or more graphics processing units (GPUs).
  • the GPU may be a single-core GPU or a multi-core GPU.
  • the processor 21 may be a processor group composed of multiple GPUs, and the multiple processors are coupled to each other through one or more buses.
  • the processor may also be another type of processor, etc., which is not limited in this embodiment of the present application.
  • the memory 22 may be configured to store computer program instructions and various types of computer program codes, including program codes configured to execute the solutions of the embodiments of the present application.
  • the memory includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or portable read-only memory (compact disc read-only memory, CD-ROM), and is configured to store related instructions and data.
  • the input device 23 is configured to input data and/or signals, and the output device 24 is configured to output data and/or signals.
  • the input device 23 and the output device 24 may be independent devices or may be an integral device.
  • the memory 22 can be configured not only to store related instructions but also to store related data, for example, the second orientation obtained through the processor 21; the data stored in the memory is not limited in this embodiment of the present application.
  • FIG. 7 only shows a simplified design of the camera orientation calibration device.
  • the camera orientation calibration device may also include other necessary components, including but not limited to any number of input/output devices, processors, memories, etc., and all camera orientation calibration devices that can implement the embodiments of the present application are within the scope of protection of this application.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned embodiments it may be implemented in whole or in part by software, hardware, firmware or any combination thereof.
  • software it can be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present application are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in or transmitted over a computer-readable storage medium.
  • the computer instructions can be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, a data center, or the like that includes an integration of one or more available media.
  • the available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., digital versatile disc (DVD)), or semiconductor media (e.g., solid state disk (SSD)), etc.
  • the process can be completed by instructing the relevant hardware by a computer program, and the program can be stored in a computer-readable storage medium.
  • when the program is executed, it may include the processes of the foregoing method embodiments.
  • the aforementioned storage medium includes media that can store program code, such as read-only memory (ROM), random access memory (RAM), magnetic disks, or optical disks.
  • the embodiments of the present disclosure disclose a camera orientation calibration method, apparatus, device, storage medium, and program, wherein the camera orientation calibration method is performed by an electronic device and includes: acquiring an image sequence of a target object, the image sequence including a first image; obtaining, based on the image sequence, the moving direction of the target object during the acquisition of the image sequence; determining a first orientation of the target object in the first image; and obtaining, based on the first orientation and the moving direction, a second orientation of a first camera to be calibrated, the first camera to be calibrated being the camera that captured the first image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

A camera orientation calibration method and related products. The method includes: acquiring an image sequence of a target object, the image sequence including a first image; obtaining, according to the image sequence, the moving direction of the target object during the acquisition of the image sequence; determining a first orientation of the target object in the first image; and obtaining, according to the first orientation and the moving direction, a second orientation of a first camera to be calibrated, the first camera to be calibrated being the camera that captured the first image. The orientation of the first camera to be calibrated is thereby calibrated without human intervention, reducing labor and time costs.

Description

Camera orientation calibration method, apparatus, device, storage medium, and program
CROSS-REFERENCE TO RELATED APPLICATION
This application claims priority to Chinese Patent Application No. 202110319369.9, filed on March 25, 2021 by Shenzhen SenseTime Technology Co., Ltd. and entitled "Camera orientation calibration method and related products", the entire contents of which are incorporated by reference into the embodiments of this application.
TECHNICAL FIELD
This application relates to the technical field of direction calibration, and in particular to a camera orientation calibration method, apparatus, device, storage medium, and program.
BACKGROUND
With the development of technology, imaging devices are used in more and more scenarios, and the shooting angle of the camera in an imaging device is crucial to capturing high-quality images. Therefore, how to calibrate the orientation of a camera is of great significance.
SUMMARY
Embodiments of this application provide a camera orientation calibration method, apparatus, device, storage medium, and program.
An embodiment of this application provides a camera orientation calibration method, the method being executed by an electronic device and including:
acquiring an image sequence of a target object, the image sequence including a first image;
obtaining, based on the image sequence, the moving direction of the target object during the acquisition of the image sequence;
determining a first orientation of the target object in the first image;
obtaining, based on the first orientation and the moving direction, a second orientation of a first camera to be calibrated, the first camera to be calibrated being the camera that captured the first image.
With reference to any embodiment of this application, the first orientation includes: the front of the target object facing the first camera to be calibrated, or the back of the target object facing the first camera to be calibrated; and obtaining the second orientation of the first camera to be calibrated based on the first orientation and the moving direction includes:
when the first orientation is the front of the target object facing the first camera to be calibrated, determining that the second orientation is opposite to the moving direction;
when the first orientation is the back of the target object facing the first camera to be calibrated, determining that the second orientation is the same as the moving direction.
In the embodiments of this application, when the first orientation covers these two cases, the camera orientation calibration apparatus determines the first orientation based on the reference included angle of the first orientation, thereby reducing the amount of data processing when determining the orientation of the first camera to be calibrated based on the first orientation and the moving direction.
With reference to any embodiment of this application, obtaining the moving direction of the target object during the acquisition of the image sequence based on the image sequence includes:
acquiring map data, the map data including a target road, the target road being the road on which the target object moves;
obtaining a first trajectory of the target object based on the image sequence;
obtaining the moving direction of the target object based on the first trajectory and the target road.
In the embodiments of this application, the target road is the road on which the target object moves. When the map data includes the target road, the camera orientation calibration apparatus can obtain information about the target road based on the map data.
In the embodiments of this application, the camera orientation calibration apparatus obtains the trajectory of the target object in the image sequence based on the position of the target object in the image sequence and the acquisition time of the image sequence.
In the embodiments of this application, the camera orientation calibration apparatus can obtain, based on the trajectory, the moving direction of the target object in the image sequence (hereinafter referred to as the reference moving direction), determine the included angle between the reference moving direction and the direction of the target road, and obtain the moving direction of the target object based on this included angle and the direction of the target road.
With reference to any embodiment of this application, the images in the image sequence are acquired by the first camera to be calibrated;
obtaining the first trajectory of the target object based on the image sequence includes:
obtaining, based on the position of the target object in the pixel coordinate system of the image sequence and the acquisition time of the images in the image sequence, the trajectory of the target object in the image sequence as the first trajectory;
obtaining the moving direction of the target object based on the first trajectory and the target road includes:
determining, based on the image sequence, a first included angle between the first trajectory and the target road;
obtaining the moving direction of the target object based on the first included angle and the direction of the target road.
With reference to any embodiment of this application, the image sequence includes a first image subsequence and a second image subsequence, the images in the first image subsequence being acquired by the first camera to be calibrated, and the images in the second image subsequence being acquired by a second camera to be calibrated;
obtaining the first trajectory of the target object based on the image sequence includes:
obtaining, based on the acquisition time of the images in the first image subsequence, the acquisition time of the images in the second image subsequence, the position of the first camera to be calibrated, and the position of the second camera to be calibrated, the trajectory of the target object in the real world as the first trajectory.
With reference to any embodiment of this application, the direction of the target road includes a first direction and a second direction;
obtaining the moving direction of the target object based on the first trajectory and the target road includes:
when it is determined that the first trajectory matches the first direction, determining that the moving direction is the first direction;
when it is determined that the first trajectory matches the second direction, determining that the moving direction is the second direction.
With reference to any embodiment of this application, the first image is the image with the largest timestamp in the image sequence.
With reference to any embodiment of this application, the image sequence further includes a second image different from the first image, the second image being acquired by the first camera to be calibrated, and the method further includes:
determining a third orientation of the target object in the second image;
obtaining a fourth orientation of the first camera to be calibrated based on the third orientation and the moving direction;
obtaining a fifth orientation of the first camera to be calibrated based on the second orientation and the fourth orientation.
In the embodiments of this application, the camera orientation calibration apparatus first determines the fourth orientation based on the second image and then obtains the fifth orientation based on the second orientation and the fourth orientation, which can improve the accuracy of the orientation of the first camera to be calibrated.
With reference to any embodiment of this application, obtaining the fifth orientation of the first camera to be calibrated based on the second orientation and the fourth orientation includes:
acquiring a first weight of the second orientation and a second weight of the fourth orientation;
performing a weighted average on the second orientation and the fourth orientation based on the first weight and the second weight to obtain the fifth orientation.
With reference to any embodiment of this application, acquiring the first weight of the second orientation includes:
determining the orientation of the first camera to be calibrated based on at least one image in an image set to obtain an orientation set, the image set including the images in the image sequence other than the first image and the second image;
determining the number of first target orientations in the orientation set to obtain a first number, a first target orientation being the same as the second orientation;
obtaining the first weight based on the first number, the first weight being positively correlated with the first number.
With reference to any embodiment of this application, acquiring the second weight of the fourth orientation includes:
determining the number of second target orientations in the orientation set of the first camera to be calibrated to obtain a second number, a second target orientation being the same as the fourth orientation;
obtaining the second weight based on the second number, the second weight being positively correlated with the second number.
With reference to any embodiment of this application, before the weighted average is performed on the second orientation and the fourth orientation based on the first weight and the second weight to obtain the fifth orientation, the method further includes:
acquiring a mapping relationship between orientations and direction angles;
determining, based on the mapping relationship, a first angle having a mapping relationship with the second orientation;
determining, based on the mapping relationship, a second angle having a mapping relationship with the fourth orientation;
performing the weighted average on the second orientation and the fourth orientation based on the first weight and the second weight to obtain the fifth orientation includes:
performing a weighted average on the first angle and the second angle based on the first weight and the second weight to obtain a third angle as the fifth orientation.
In the embodiments of this application, the camera orientation calibration apparatus maps the second orientation and the fourth orientation to the first angle and the second angle respectively based on the mapping relationship. A weighted average of the first angle and the second angle yields the third angle, which is taken as the fifth orientation, improving the precision of the orientation of the camera to be calibrated.
With reference to any embodiment of this application, performing the weighted average on the first angle and the second angle based on the first weight and the second weight to obtain the third angle includes:
mapping the first angle to a first point on a reference circle, the first angle being equal to a second included angle, the second included angle being the angle between a first vector and a coordinate axis of a rectangular coordinate system, the first vector being the vector from the center of the reference circle to the first point, and the reference circle being in the rectangular coordinate system;
mapping the second angle to a second point on the reference circle, the second angle being equal to a third included angle, the third included angle being the angle between a second vector and the coordinate axis, the second vector being the vector from the center of the circle to the second point;
performing a weighted average on the coordinates of the first point and the coordinates of the second point based on the first weight and the second weight to obtain a third point;
determining the angle between a third vector and the coordinate axis to obtain the third angle, the third vector being the vector from the center of the circle to the third point.
In the embodiments of this application, the camera orientation calibration apparatus can thereby reduce the probability of the above-mentioned error occurring.
With reference to any embodiment of this application, the first camera to be calibrated includes a camera.
An embodiment of this application provides a camera orientation calibration apparatus, the apparatus including:
an acquisition unit configured to acquire an image sequence of a target object, the image sequence including a first image;
a first processing unit configured to obtain, based on the image sequence, the moving direction of the target object during the acquisition of the image sequence;
a second processing unit configured to determine a first orientation of the target object in the first image;
a third processing unit configured to obtain, based on the first orientation and the moving direction, a second orientation of a first camera to be calibrated, the first camera to be calibrated being the camera that captured the first image.
With reference to any embodiment of this application, the first orientation includes: the front of the target object facing the first camera to be calibrated or the back of the target object facing the first camera to be calibrated, and the third processing unit is configured to:
when the first orientation is the front of the target object facing the first camera to be calibrated, determine that the second orientation is opposite to the moving direction;
when the first orientation is the back of the target object facing the first camera to be calibrated, determine that the second orientation is the same as the moving direction.
With reference to any embodiment of this application, the third processing unit is configured to:
acquire map data, the map data including a target road, the target road being the road on which the target object moves;
obtain a first trajectory of the target object based on the image sequence;
obtain the moving direction of the target object based on the first trajectory and the target road.
With reference to any embodiment of this application, the images in the image sequence are acquired by the first camera to be calibrated;
the third processing unit is configured to:
obtain, based on the position of the target object in the pixel coordinate system of the image sequence and the acquisition time of the images in the image sequence, the trajectory of the target object in the image sequence as the first trajectory;
determine, based on the image sequence, a first included angle between the first trajectory and the target road;
obtain the moving direction of the target object based on the first included angle and the direction of the target road.
With reference to any embodiment of this application, the image sequence includes a first image subsequence and a second image subsequence, the images in the first image subsequence being acquired by the first camera to be calibrated, and the images in the second image subsequence being acquired by a second camera to be calibrated;
the third processing unit is configured to:
obtain, based on the acquisition time of the images in the first image subsequence, the acquisition time of the images in the second image subsequence, the position of the first camera to be calibrated, and the position of the second camera to be calibrated, the trajectory of the target object in the real world as the first trajectory;
With reference to any embodiment of this application, the direction of the target road includes a first direction and a second direction;
the third processing unit is configured to:
when it is determined that the first trajectory matches the first direction, determine that the moving direction is the first direction;
when it is determined that the first trajectory matches the second direction, determine that the moving direction is the second direction.
With reference to any embodiment of this application, the first image is the image with the largest timestamp in the image sequence.
With reference to any embodiment of this application, the image sequence further includes a second image different from the first image, the second image being acquired by the first camera to be calibrated, and the third processing unit is further configured to:
determine a third orientation of the target object in the second image;
obtain a fourth orientation of the first camera to be calibrated based on the third orientation and the moving direction;
obtain a fifth orientation of the first camera to be calibrated based on the second orientation and the fourth orientation.
With reference to any embodiment of this application, the third processing unit is further configured to:
acquire a first weight of the second orientation and a second weight of the fourth orientation;
perform a weighted average on the second orientation and the fourth orientation based on the first weight and the second weight to obtain the fifth orientation.
With reference to any embodiment of this application, the third processing unit is further configured to:
determine the orientation of the first camera to be calibrated based on at least one image in an image set to obtain an orientation set, the image set including the images in the image sequence other than the first image and the second image;
determine the number of first target orientations in the orientation set to obtain a first number, a first target orientation being the same as the second orientation;
obtain the first weight based on the first number, the first weight being positively correlated with the first number.
With reference to any embodiment of this application, the third processing unit is further configured to:
determine the number of second target orientations in the orientation set of the first camera to be calibrated to obtain a second number, a second target orientation being the same as the fourth orientation;
obtain the second weight based on the second number, the second weight being positively correlated with the second number.
With reference to any embodiment of this application, the acquisition unit is further configured to: before the weighted average is performed on the second orientation and the fourth orientation based on the first weight and the second weight to obtain the fifth orientation, acquire a mapping relationship between orientations and direction angles;
the third processing unit is further configured to:
determine, based on the mapping relationship, a first angle having a mapping relationship with the second orientation;
determine, based on the mapping relationship, a second angle having a mapping relationship with the fourth orientation;
perform a weighted average on the first angle and the second angle based on the first weight and the second weight to obtain a third angle as the fifth orientation.
With reference to any embodiment of this application, the third processing unit is further configured to:
map the first angle to a first point on a reference circle, the first angle being equal to a second included angle, the second included angle being the angle between a first vector and a coordinate axis of a rectangular coordinate system, the first vector being the vector from the center of the reference circle to the first point, and the reference circle being in the rectangular coordinate system;
map the second angle to a second point on the reference circle, the second angle being equal to a third included angle, the third included angle being the angle between a second vector and the coordinate axis, the second vector being the vector from the center of the circle to the second point;
perform a weighted average on the coordinates of the first point and the coordinates of the second point based on the first weight and the second weight to obtain a third point;
determine the angle between a third vector and the coordinate axis to obtain the third angle, the third vector being the vector from the center of the circle to the third point.
With reference to any embodiment of this application, the first camera to be calibrated includes a camera.
An embodiment of this application provides an electronic device, including: a processor and a memory, the memory being configured to store computer program code, the computer program code including computer instructions; when the processor executes the computer instructions, the electronic device performs the method of the first aspect above and any one of its possible implementations.
An embodiment of this application provides another electronic device, including: a processor, a transmitting apparatus, an input apparatus, an output apparatus, and a memory, the memory being configured to store computer program code, the computer program code including computer instructions; when the processor executes the computer instructions, the electronic device performs the method of the first aspect above and any one of its possible implementations.
An embodiment of this application provides a computer-readable storage medium storing a computer program, the computer program including program instructions; when the program instructions are executed by a processor, the processor performs the method of the first aspect above and any one of its possible implementations.
An embodiment of this application provides a computer program product including a computer program or instructions; when the computer program or instructions run on a computer, the computer performs the method of the first aspect above and any one of its possible implementations.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit this application.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of this application or the background more clearly, the drawings required in the embodiments of this application or the background are described below.
The drawings here are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with this application and, together with the specification, serve to explain the technical solutions of the embodiments of this application.
FIG. 1A is a schematic diagram of a target axis according to an embodiment of this application;
FIG. 1B is a schematic diagram of a system architecture of a camera orientation calibration method according to an embodiment of this application;
FIG. 2 is a schematic diagram of another target axis according to an embodiment of this application;
FIG. 3 is a schematic diagram of a pixel coordinate system according to an embodiment of this application;
FIG. 4 is a schematic flowchart of a camera orientation calibration method according to an embodiment of this application;
FIG. 5 is a schematic diagram of a direction coordinate system according to an embodiment of this application;
FIG. 6 is a schematic structural diagram of a camera orientation calibration apparatus according to an embodiment of this application;
FIG. 7 is a schematic diagram of a hardware structure of a camera orientation calibration apparatus according to an embodiment of this application.
DETAILED DESCRIPTION
To enable those skilled in the art to better understand the solutions of the embodiments of this application, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
The terms "first", "second", and the like in the specification, claims, and drawings of the embodiments of this application are used to distinguish different objects rather than to describe a particular order. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product, or device.
It should be understood that in the embodiments of this application, "at least one (item)" means one or more, "multiple" means two or more, and "at least two (items)" means two, three, or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character "/" may indicate an "or" relationship between the associated objects before and after it, and refers to any combination of these items, including any combination of a single item or plural items. For example, at least one of a, b, or c may indicate: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may each be single or multiple. The character "/" may also indicate the division sign in a mathematical operation, for example, a/b = a divided by b; 6/3 = 2. "At least one of the following" and similar expressions are used likewise.
Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of this application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor does it refer to an independent or alternative embodiment that is mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
With the development of technology, imaging devices are used in more and more scenarios, and the shooting angle of the camera in an imaging device is crucial to capturing high-quality images. Therefore, how to calibrate the orientation of a camera is of great significance.
Traditional methods determine the orientation of a camera by manual calibration. When the number of cameras to be calibrated is large, such methods require enormous labor and time costs. On this basis, the embodiments of this application provide a camera orientation calibration technical solution that completes the calibration of camera orientation without human intervention.
Before proceeding, some concepts used herein are defined.
1. Target axis: the axis of symmetry of the target object viewed from the front of the target object. For example, as shown in FIG. 1A, assuming the target object is a human body, the target axis is the vertical axis of the human body. For another example, as shown in FIG. 2, assuming the target object is a vehicle, the target axis is the axis of symmetry of the vehicle viewed from the front of the vehicle.
2. Orientation of the target object in an image: the included angle between the shooting direction of the imaging device that captured the image and the target axis (hereinafter referred to as the reference included angle), where, viewed from above the target object, the clockwise direction is the positive direction of the reference included angle.
3. Positions in an image all refer to positions in the pixel coordinate system of the image. In the pixel coordinate system of the embodiments of this application, the abscissa indicates the column of a pixel and the ordinate indicates the row of a pixel. For example, in the image shown in FIG. 3, with the upper-left corner of the image as the coordinate origin O, the direction parallel to the rows of the image as the X-axis, and the direction parallel to the columns of the image as the Y-axis, a pixel coordinate system XOY is constructed. The units of the abscissa and the ordinate are both pixels. For example, in FIG. 3 the coordinates of pixel A11 are (1, 1), the coordinates of pixel A23 are (3, 2), the coordinates of pixel A42 are (2, 4), and the coordinates of pixel A34 are (4, 3).
For convenience, [a, b] hereinafter denotes the value interval greater than or equal to a and less than or equal to b, (c, d] denotes the value interval greater than c and less than or equal to d, and [e, f) denotes the value interval greater than or equal to e and less than f.
The execution subject of the embodiments of this application is a camera orientation calibration apparatus, which may be any electronic device capable of executing the technical solutions disclosed in the method embodiments of this application. Optionally, the camera orientation calibration apparatus may be one of the following: a mobile phone, a computer, a tablet computer, or a wearable smart device.
It should be understood that the method embodiments of this application may also be implemented by a processor executing computer program code. The embodiments of this application are described below with reference to the drawings in the embodiments of this application.
Referring to FIG. 4, FIG. 4 is a schematic flowchart of a camera orientation calibration method according to an embodiment of this application.
401. Acquire an image sequence of a target object, the image sequence including a first image.
In the embodiments of this application, the target object may be any object. In one possible implementation, the target object includes one of the following: a human body, a human face, or a vehicle.
In the embodiments of this application, the first camera to be calibrated may be any imaging device. For example, the first camera to be calibrated may be a camera on a terminal. For another example, the first camera to be calibrated may be a camera.
In the implementation of this application, the image sequence includes at least one image; the images in the image sequence all include the target object, and the images in the image sequence are all acquired by the first camera to be calibrated.
The at least one image in the image sequence is arranged in order of acquisition time. For example, the image sequence includes image a, image b, and image c, where the acquisition time of image a is earlier than that of image c, and the acquisition time of image c is earlier than that of image b. Then the order of image a, image b, and image c in the image sequence is: image a, image c, image b.
In this step, all the images in the image sequence may be acquired by the first camera to be calibrated, or some of the images in the image sequence may be acquired by the first camera to be calibrated.
For example, the image sequence includes image a, image b, and image c. When all the images in the image sequence are acquired by the first camera to be calibrated, image a, image b, and image c may all be acquired by the first camera to be calibrated; when some of the images in the image sequence are acquired by the first camera to be calibrated, image a and image b may be acquired by the first camera to be calibrated while image c is acquired by camera A.
Optionally, the position of the first camera to be calibrated is fixed while it acquires the images in the image sequence. For example, the image sequence includes image a and image b, both acquired by the first camera to be calibrated. Then the position of the first camera to be calibrated when acquiring image a is the same as its position when acquiring image b.
In the embodiments of this application, the first image is any image in the image sequence. For example, if the image sequence includes image a and image b, the first image may be image a or image b.
In one implementation of acquiring the image sequence of the target object, the camera orientation calibration apparatus receives the image sequence of the target object input by a user through an input component, the input component including a keyboard, a mouse, a touch screen, a touch pad, an audio input device, and the like.
In another implementation of acquiring the image sequence of the target object, the camera orientation calibration apparatus receives the image sequence of the target object sent by a terminal, the terminal being any one of the following: a mobile phone, a computer, a tablet computer, or a server.
In yet another implementation of acquiring the image sequence of the target object, the first camera to be calibrated belongs to the camera orientation calibration apparatus, and the camera orientation calibration apparatus acquires the image sequence of the target object by capturing, through the first camera to be calibrated, a video stream containing the target object.
In yet another implementation of acquiring the image sequence of the target object, the camera orientation calibration apparatus obtains a to-be-processed video stream captured by the first camera to be calibrated. By performing target object detection on the images in the to-be-processed video stream, the camera orientation calibration apparatus determines the images in the to-be-processed video stream that contain the target object and obtains the image sequence of the target object.
In yet another implementation of acquiring the image sequence of the target object, the first camera to be calibrated belongs to the camera orientation calibration apparatus. The camera orientation calibration apparatus uses the first camera to be calibrated to capture a to-be-processed video stream, performs target object detection on the images in the to-be-processed video stream, determines the images containing the target object, and obtains the image sequence of the target object.
402. Obtain, based on the image sequence, the moving direction of the target object during the acquisition of the image sequence.
In the embodiments of this application, the moving direction of the target object during the acquisition of the image sequence is the moving direction of the target object in the real world.
In one possible implementation, before performing step 402, the camera orientation calibration apparatus acquires a reference image sequence, where the images in the reference image sequence are acquired by a reference camera whose position is fixed while acquiring the reference image sequence. The image sequence composed of the reference image sequence and the image sequence acquired by the first camera to be calibrated is called the target image sequence. Based on the position of the reference camera, the position of the first camera to be calibrated, the time at which the reference camera acquired the reference image sequence, and the time at which the first camera to be calibrated acquired the image sequence, the camera orientation calibration apparatus obtains the trajectory of the target object during the acquisition of the target image sequence, and thereby obtains the moving direction of the target object during the acquisition of the target image sequence as the moving direction of the target object during the acquisition of the image sequence by the first camera to be calibrated.
403. Determine the first orientation of the target object in the first image.
In the implementation of this application, the camera orientation calibration apparatus processes the first image to obtain the orientation of the target object as the first orientation.
In one possible implementation, the camera orientation calibration apparatus processes the first image using an orientation model to obtain the first orientation. The orientation model is obtained by training a neural network with an annotated image set as training data, where the images in the annotated image set (hereinafter referred to as training images) all contain the target object, and the labels of the training images include the orientation of the target object.
During the training of the neural network, the neural network processes a training image to obtain the orientation of the target object in the training image. A first training loss is obtained based on this orientation and the label of the training image, and the parameters of the neural network are updated based on the first training loss to obtain the orientation model.
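The training loop described above can be sketched in miniature. The sketch below stands in a one-layer softmax classifier for the neural network and discretizes orientation into four classes; the features, labels, architecture, and learning rate are all illustrative assumptions, not the embodiment's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, feat_dim = 4, 8  # orientations discretized to 0/90/180/270 deg
W = np.zeros((feat_dim, num_classes))

# Synthetic "training images": feature vectors whose boosted coordinate
# encodes the labeled orientation, so the task is learnable.
X = rng.normal(size=(256, feat_dim))
y = rng.integers(0, num_classes, size=256)
X[np.arange(256), y] += 3.0

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

losses = []
for _ in range(100):
    probs = softmax(X @ W)                  # predicted orientation distribution
    losses.append(cross_entropy(probs, y))  # the "first training loss"
    grad = X.T @ (probs - np.eye(num_classes)[y]) / len(y)
    W -= 0.5 * grad                         # update the network's parameters
```

The loss computed from the predicted orientation and the label drives the parameter update, exactly the loop the text describes, just with a trivially small model.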
404. Obtain, based on the first orientation and the moving direction, the second orientation of the first camera to be calibrated, the first camera to be calibrated being the camera that captured the first image.
In this embodiment, the first image is acquired by the first camera to be calibrated. The first orientation is the included angle between the moving direction of the target object and the shooting direction of the first camera to be calibrated, and the second orientation is the shooting direction of the first camera to be calibrated.
In one possible implementation, the camera orientation calibration apparatus obtains, based on the first orientation, the included angle between the shooting direction of the first camera to be calibrated and the moving direction of the target object (hereinafter referred to as the target included angle), and obtains the second orientation based on the target included angle and the moving direction.
For example, suppose the first orientation is 60 degrees and the moving direction of the target object is 30 degrees south of due east. Then the included angle between the shooting direction of the first camera to be calibrated and the moving direction of the target object is 180 degrees - 60 degrees = 120 degrees, and the second orientation is due north; that is, the shooting direction of the first camera to be calibrated is due north.
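The arithmetic of this example can be checked with a short calculation. The sketch below assumes compass bearings in degrees (0° = due north, clockwise positive) and an orientation convention consistent with the document's front-facing/back-facing rules; both conventions are assumptions made for illustration:

```python
def camera_bearing_from_orientation(first_orientation, moving_bearing):
    """Recover the camera's shooting bearing from the target's orientation in
    the image (degrees) and the target's moving bearing (degrees, 0 = due
    north, clockwise). With orientation 0 (front facing the camera) the camera
    points opposite to the movement; with orientation 180 (back facing the
    camera) it points the same way."""
    return (moving_bearing - 180.0 + first_orientation) % 360.0

# The example from the text: first orientation 60 deg, movement 30 deg south
# of due east (bearing 120 deg) -> shooting direction due north (0 deg).
assert camera_bearing_from_orientation(60.0, 120.0) == 0.0
```

Note that the front-facing case (orientation 0) and back-facing case (orientation 180) fall out of the same formula, matching the two-case rule stated elsewhere in this document.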
In the embodiments of this application, the camera orientation calibration apparatus obtains the moving direction of the target object based on the trajectory of the target object in the image sequence, and then obtains the orientation of the first camera to be calibrated based on the first orientation of the target object in the first image and this moving direction. The calibration of the orientation of the first camera to be calibrated is thereby completed without human intervention, reducing labor and time costs.
FIG. 1B is a schematic diagram of a system architecture to which the camera orientation calibration method of an embodiment of this application can be applied. As shown in FIG. 1B, the system architecture includes an image acquisition device 2001, a network 2002, and an image acquisition terminal 2003. To support an exemplary application, the image acquisition device 2001 and the image acquisition terminal 2003 can establish a communication connection through the network 2002; the image acquisition device 2001 transmits the captured images to the image acquisition terminal 2003 through the network 2002, and the image acquisition terminal 2003 receives the images, analyzes them, and determines the second orientation of the camera to be calibrated based on the first orientation and the moving direction of the target object in the images.
As an example, the image acquisition device 2001 may include an image capture device such as a camera. The image acquisition terminal 2003 may include a computer device with certain computing capabilities, for example a terminal device, a server, or another processing device. The network 2002 may use a wired or wireless connection. When the image acquisition device 2001 is an image capture device and the image acquisition terminal 2003 is a server, the image capture device may be communicatively connected to the image acquisition terminal through a wired connection, for example, data communication over a bus; when the image acquisition device 2001 is an image capture device and the image acquisition terminal 2003 is a terminal device, the image capture device may be communicatively connected to the image acquisition terminal through a wireless connection for data communication.
Alternatively, in some scenarios, the image acquisition terminal 2003 may be a vision processing device with a video capture module, or a host with a camera. In this case, the information processing method of the embodiments of this application may be executed by the image acquisition terminal 2003, and the above system architecture may not include the network 2002 and the image acquisition device 2001.
As an optional implementation, the first orientation includes: the front of the target object facing the first camera to be calibrated, or the back of the target object facing the first camera to be calibrated.
In this implementation, when the reference included angle corresponding to the first orientation is within [0°, 90°], the target object faces the first camera to be calibrated with its front; when the reference included angle corresponding to the first orientation is within [270°, 360°), the target object faces the first camera to be calibrated with its front; when the reference included angle corresponding to the first orientation is within (90°, 270°), the target object faces the first camera to be calibrated with its back.
In this implementation, the camera orientation calibration apparatus performs the following steps in the course of performing step 404:
1. When the first orientation is the front of the target object facing the first camera to be calibrated, determine that the second orientation is opposite to the moving direction.
When the first orientation is the front of the target object facing the first camera to be calibrated, the target object is moving toward the camera. Therefore, the camera orientation calibration apparatus determines that the orientation of the first camera to be calibrated is opposite to the moving direction, that is, the second orientation is opposite to the moving direction.
2. When the first orientation is the back of the target object facing the first camera to be calibrated, determine that the second orientation is the same as the moving direction.
When the first orientation is the back of the target object facing the first camera to be calibrated, the target object is moving with its back to the camera. Therefore, the camera orientation calibration apparatus determines that the orientation of the first camera to be calibrated is the same as the moving direction, that is, the second orientation is the same as the moving direction.
In this embodiment, when the first orientation covers these two cases, the camera orientation calibration apparatus determines the first orientation based on the reference included angle of the first orientation, thereby reducing the amount of data processing when determining the orientation of the first camera to be calibrated based on the first orientation and the moving direction.
As an optional implementation, the images in the image sequence all contain timestamps. The camera orientation calibration apparatus performs the following steps in the course of performing step 402:
3. Acquire map data, the map data including a target road, the target road being the road on which the target object moves. That is, the target object moves on the target road during the acquisition of the image sequence by the first camera to be calibrated.
In the embodiments of this application, the target road is the road on which the target object moves. When the map data includes the target road, the camera orientation calibration apparatus can obtain information about the target road based on the map data. For example, the information about the target road includes one or more of the following: the width of the target road, the length of the target road, the position of the target road, and the direction of the target road, where the direction of the target road includes at least two directions. For example, if the target road runs north-south, its direction includes two directions: from south to north and from north to south. For another example, if the target road runs 30 degrees west of north and 30 degrees south of east, its direction includes two directions: 30 degrees west of north and 30 degrees south of east.
In one implementation of acquiring the map data, the camera orientation calibration apparatus receives the map data input by a user through an input component.
In another implementation of acquiring the map data, the camera orientation calibration apparatus receives the map data sent by a terminal.
4. Obtain the first trajectory of the target object based on the image sequence.
In one possible implementation, the camera orientation calibration apparatus obtains the trajectory of the target object in the image sequence based on the position of the target object in the image sequence and the acquisition time of the image sequence.
For example, the image sequence includes image a and image b, where the acquisition time of image a is t1, the acquisition time of image b is t2, and t1 is earlier than t2. In image a, the position of the target object is (3, 4); in image b, the position of the target object is (5, 4). Then the trajectory of the target object in the image sequence is: the target object is at (3, 4) at time t1 and at (5, 4) at time t2.
In another possible implementation, before performing step 4, the camera orientation calibration apparatus acquires a reference image sequence, where the images in the reference image sequence are acquired by a reference camera whose position is fixed while acquiring the reference image sequence. The image sequence composed of the reference image sequence and the image sequence acquired by the first camera to be calibrated is called the target image sequence. Based on the position of the reference camera, the position of the first camera to be calibrated, the time at which the reference camera acquired the reference image sequence, and the time at which the first camera to be calibrated acquired the image sequence, the camera orientation calibration apparatus obtains the trajectory of the target object during the acquisition of the target image sequence as the trajectory of the target object during the acquisition of the image sequence by the first camera to be calibrated.
5. Obtain the moving direction of the target object based on the first trajectory and the target road.
In one possible implementation, the camera orientation calibration apparatus can obtain, based on the trajectory, the moving direction of the target object in the image sequence (hereinafter referred to as the reference moving direction), determine the included angle between the reference moving direction and the direction of the target road, and obtain the moving direction of the target object based on this included angle and the direction of the target road.
For example, in the example in step 4, the camera orientation calibration apparatus can determine, based on the trajectory, that the moving direction of the target object in the image sequence is the positive direction of the horizontal axis of the pixel coordinate system.
Suppose the target road runs north-south; that is, its direction includes the direction from due south to due north (hereinafter referred to as direction one) and the direction from due north to due south (hereinafter referred to as direction two). If it is determined that, in the image sequence, the angle between the direction of the target road and the reference moving direction is such that the reference moving direction rotated 60 degrees counterclockwise coincides with direction one, then in the real world the moving direction of the target object rotated 60 degrees counterclockwise coincides with the direction of the target road. Given that the target road runs north-south, the camera orientation calibration apparatus can determine that the moving direction of the target object is 60 degrees east of north.
As an optional implementation, the images in the image sequence are acquired by the first camera to be calibrated. In this implementation, the camera orientation calibration apparatus performs the following steps in the course of performing step 4:
6. Obtain, based on the position of the target object in the pixel coordinate system of the image sequence and the acquisition time of the images in the image sequence, the trajectory of the target object in the image sequence as the first trajectory.
In this step, the position of the first camera to be calibrated is fixed while it acquires the image sequence. In this case, the pixel coordinate system of the image sequence is the pixel coordinate system of any image in the image sequence.
By performing step 6, the camera orientation calibration apparatus can obtain the trajectory of the target object in the image sequence, that is, the trajectory of the target object in the pixel coordinate system of the image sequence, namely the second trajectory, and takes the second trajectory as the first trajectory; that is, the first trajectory is the trajectory of the target object in the pixel coordinate system of the image sequence.
For example, the image sequence includes image a and image b, where the acquisition time of image a is t1, the acquisition time of image b is t2, and t1 is earlier than t2. In image a, the position of the target object is (3, 4); in image b, the position of the target object is (5, 4). Then the trajectory of the target object in the image sequence (that is, the second trajectory) is: the target object is at (3, 4) at time t1 and at (5, 4) at time t2.
On the basis of performing step 6, the camera orientation calibration apparatus performs the following steps in the course of performing step 5:
7. Determine, based on the image sequence, the first included angle between the first trajectory and the target road.
In one possible implementation, the camera orientation calibration apparatus performs road detection on the image sequence to determine the position of the target road in the pixel coordinate system of the image sequence. Based on the first trajectory and the position of the target road, the included angle between the first trajectory and the target road (that is, the included angle between the second trajectory and the target road), namely the first included angle, is obtained.
8. Obtain the moving direction of the target object based on the first included angle and the direction of the target road.
The camera orientation calibration apparatus can determine the direction of the target road based on the map data. Based on the first included angle, the direction matching the first trajectory can be determined from the directions of the target road as the moving direction of the target object.
For example, suppose the target road runs north-south; its direction then includes two directions, from south to north and from north to south. Suppose the first direction is from south to north and the second direction is from north to south. By performing step 7, the camera orientation calibration apparatus determines that, in the pixel coordinate system of the image sequence, the angle between the first trajectory and the first direction is the first included angle. If the first included angle is within [0°, 90°], the camera orientation calibration apparatus determines that the first trajectory matches the first direction and thus that the moving direction of the target object is from south to north; if the first included angle is within (90°, 180°), the camera orientation calibration apparatus determines that the first trajectory matches the second direction and thus that the moving direction of the target object is from north to south.
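The matching rule of steps 7 and 8 can be sketched as follows; representing the road's first direction as a unit vector and deriving the trajectory vector from two points are illustrative assumptions:

```python
import math

def included_angle(v1, v2):
    """Angle in degrees between two 2-D vectors, in [0, 180]."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def match_moving_direction(traj_start, traj_end, first_direction):
    """Pick which of the road's two opposite directions the trajectory
    matches, as in the text: a first included angle in [0, 90] matches the
    first direction, otherwise the opposite (second) direction matches."""
    v = (traj_end[0] - traj_start[0], traj_end[1] - traj_start[1])
    return "first" if included_angle(v, first_direction) <= 90.0 else "second"

# Road running south-to-north, represented as the unit vector (0, 1) in a
# hypothetical ground frame: a roughly northward trajectory matches "first".
assert match_moving_direction((0, 0), (1, 5), (0, 1)) == "first"
assert match_moving_direction((0, 0), (1, -5), (0, 1)) == "second"
```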
在一种可能的应用场景下，A地区包括100个摄像头，这100个摄像头均与摄像头管理中心的服务器之间具有通信连接，服务器通过该通信连接可获取这100个摄像头采集到的视频流。
现A地区的工作人员想要标定100个摄像头中的B摄像头的朝向。那么，工作人员可通过服务器获取B摄像头采集到的图像序列C，其中，该图像序列C包含目标对象(如图像序列C包含张三)。服务器进而基于该种实施方式所提供的技术方案，对图像序列C进行处理，即将图像序列C作为该种实施方式中的图像序列、将B摄像头作为第一待标定摄像头，得到B摄像头的朝向。
作为一种可选的实施方式,图像序列包括第一图像子序列和第二图像子序列,第一图像子序列中的图像由第一待标定摄像头采集得到,第二图像子序列中的图像由第二待标定摄像头采集得到。
第一待标定摄像头与第二待标定摄像头不同,且第二待标定摄像头在采集第二图像子序列时的位置与第一待标定摄像头采集第一图像子序列时的位置不同。其中,第一待标定摄像头的位置为,第一待标定摄像头在真实世界下的位置。第二待标定摄像头的位置为,第二待标定摄像头在真实世界下的位置。
例如,图像序列包括图像a、图像b、图像c和图像d,其中,图像a和图像b属于第一图像子序列,图像c和图像d属于第二图像子序列。此时,图像a和图像b由第一待标定摄像头采集得到,图像c和图像d由第二待标定摄像头采集得到。
若第一待标定摄像头采集图像a时的位置和第一待标定摄像头采集图像b时的位置均为位置1,第二待标定摄像头采集图像c时的位置和第二待标定摄像头采集图像d时的位置均为位置2。那么,位置1和位置2不同。
如步骤3所述，目标道路的走向包括两个方向。在该种实施方式中，目标道路的走向包括第一方向和第二方向。摄像头朝向标定装置在执行步骤4的过程中执行以下步骤：
9、基于上述第一图像子序列中的图像的采集时间、上述第二图像子序列中的图像的采集时间、上述第一待标定摄像头的位置和上述第二待标定摄像头的位置,得到上述目标对象在真实世界下的轨迹,作为上述第一轨迹。
摄像头朝向标定装置基于第一图像子序列中的图像的采集时间和第二图像子序列中的图像的采集时间,可确定采集第一图像子序列和采集第二图像子序列的先后顺序,进而可确定目标对象经过第一待标定摄像头所在的位置和经过第二待标定摄像头所在的位置的先后顺序。这样,摄像头朝向标定装置可基于第一待标定摄像头的位置和第二待标定摄像头的位置,得到目标对象在真实世界下的轨迹,即第三轨迹。在步骤9中,摄像头朝向标定装置将第三轨迹作为第一轨迹,即第一轨迹为目标对象在真实世界下的轨迹。
例如,假设第一待标定摄像头的位置为位置1,第二待标定摄像头的位置为位置2,第一图像子序列中的最晚图像的采集时间为t1,第二图像子序列中的最晚图像的采集时间为t2,其中,最晚图像为图像子序列中采集时间最晚的图像。若t1早于t2,摄像头朝向标定装置得到第三轨迹,其中,第三轨迹包括目标对象在t1出现在位置1,并经过移动,在t2出现在位置2。
应理解，本实施方式中的第一待标定摄像头和第二待标定摄像头仅为示例，不应理解为图像序列中的图像仅通过两个摄像头采集得到。在实际应用中，图像序列中除由第一待标定摄像头采集的图像之外，还可包括由其他一个或多个摄像头采集的图像。
例如,图像序列包括图像a、图像b、图像c、图像d、图像e和图像f,其中,图像a和图像b由第一待标定摄像头采集得到,图像c和图像d由第二待标定摄像头采集得到,图像e和图像f由第三待标定摄像头采集得到。
在图像序列中除由第一待标定摄像头采集的图像之外，还包括由其他一个或多个摄像头采集的图像的情况下，将采集图像序列中的图像的摄像头称为目标摄像头群，摄像头朝向标定装置可基于目标摄像头群中每个摄像头的位置和每个摄像头采集图像的时间，得到第三轨迹。此时，目标摄像头群中的每个摄像头在采集图像序列中的图像时的位置均为固定的，且任意两个摄像头的位置均不同。
例如,图像序列包括图像a、图像b、图像c、图像d、图像e和图像f,其中,图像a和图像b属于第一图像子序列,图像c和图像d属于第二图像子序列,图像e和图像f属于第三图像子序列。第一图像子序列由第一待标定摄像头采集得到,第二图像子序列由第二待标定摄像头采集得到,第三图像子序列由第三待标定摄像头采集得到。
假设第一待标定摄像头采集第一图像子序列时的位置为位置1,第二待标定摄像头采集第二图像子序列时的位置为位置2,第三待标定摄像头采集第三图像子序列时的位置为位置3,且,位置1和位置2不同、位置2和位置3不同、位置1和位置3不同。
第一图像子序列中的最晚图像的采集时间为t1,第二图像子序列中的最晚图像的采集时间为t2,第三图像子序列中的最晚图像的采集时间为t3,其中,最晚图像为图像子序列中采集时间最晚的图像。若t1早于t2,t2早于t3,摄像头朝向标定装置得到第三轨迹,其中,第三轨迹包括目标对象在t1出现在位置1,并经过移动,在t2出现在位置2,经过移动,在t3出现在位置3。
同理,在目标摄像头群包括4个摄像头、5个摄像头、…、m个摄像头的情况下,摄像头朝向标定装置可基于目标摄像头群中每个摄像头的位置和每个摄像头采集图像的时间,得到第三轨迹,其中,m为大于5的正整数。
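上述基于各摄像头的位置与其图像子序列中最晚图像的采集时间得到第三轨迹的过程，可用如下Python草图示意(数据结构与函数名均为说明用途的假设)：

```python
# 示意性草图：按各图像子序列最晚图像的采集时间排序摄像头位置，得到真实世界下的轨迹
def real_world_trajectory(camera_records):
    """camera_records为[(最晚图像采集时间, 摄像头位置), ...]，返回按时间排序的位置序列"""
    return [position for _, position in sorted(camera_records)]

# t1早于t2、t2早于t3时，目标对象依次出现在位置1、位置2、位置3
track = real_world_trajectory([(3, "位置3"), (1, "位置1"), (2, "位置2")])
print(track)  # ['位置1', '位置2', '位置3']
```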
摄像头朝向标定装置在执行步骤9的基础上,在执行步骤5的过程中执行以下步骤:
10、在确定上述第一轨迹和上述第一方向匹配的情况下,确定上述移动方向为上述第一方向。
11、在确定上述第一轨迹和上述第二方向匹配的情况下,确定上述移动方向为上述第二方向。
在步骤10和步骤11中,第一轨迹和第一方向匹配指第一轨迹方向与第一方向匹配,第一轨迹和第二方向匹配指第一轨迹方向与第二方向匹配。
在一种可能实现的方式中,目标道路的走向包括:南北走向和东西走向,其中,南北走向包括:从南到北的方向和从北到南的方向,东西走向包括:从东到西的方向和从西到东的方向。在目标道路的走向为南北走向时,第一方向可以是从南到北的方向,第二方向则是从北到南的方向;在目标道路的走向为南北走向时,第一方向也可以是从北到南的方向,第二方向则是从南到北的方向。
在目标道路的走向为南北走向，且第一轨迹方向朝南(此时的朝南包括：正南、东偏南、西偏南)的情况下，确定第一轨迹与从北到南的方向匹配，摄像头朝向标定装置确定目标对象的移动方向为正南；在目标道路的走向为南北走向，且第一轨迹方向朝北(此时的朝北包括：正北、东偏北、西偏北)的情况下，确定第一轨迹与从南到北的方向匹配，摄像头朝向标定装置确定目标对象的移动方向为正北；在目标道路的走向为东西走向，且第一轨迹方向朝东(此时的朝东包括：正东、北偏东、南偏东)的情况下，确定第一轨迹与从西到东的方向匹配，摄像头朝向标定装置确定目标对象的移动方向为正东；在目标道路的走向为东西走向，且第一轨迹方向朝西(此时的朝西包括：正西、北偏西、南偏西)的情况下，确定第一轨迹与从东到西的方向匹配，摄像头朝向标定装置确定目标对象的移动方向为正西。
摄像头朝向标定装置通过步骤10和步骤11确定目标对象的移动方向,可减少数据处理量,提高处理速度。
在一种可能的应用场景下，A地区包括100个摄像头，这100个摄像头均与摄像头管理中心的服务器之间具有通信连接，服务器通过该通信连接可获取这100个摄像头采集到的视频流。
现A地区的工作人员想要标定100个摄像头中的B摄像头的朝向。那么，工作人员可通过服务器获取B摄像头采集到的图像序列C和100个摄像头中的D摄像头采集到的图像序列E，其中，该图像序列C和图像序列E均包含目标对象(如图像序列C和图像序列E均包含张三)。
服务器将图像序列C作为第一图像子序列、将图像序列E作为第二图像子序列，得到图像序列F。服务器进而基于该种实施方式所提供的技术方案，对图像序列F进行处理，即将图像序列F作为该种实施方式中的图像序列、将B摄像头作为第一待标定摄像头，得到B摄像头的朝向。
作为一种可选的实施方式,上述第一图像为上述图像序列中时间戳最大的图像。
由于目标对象在离开第一待标定摄像头所拍摄的范围时的朝向与目标对象的移动方向的匹配度最高，而第一待标定摄像头的朝向基于第一朝向和目标对象的移动方向得到，摄像头朝向标定装置通过从图像序列中选取时间戳最大的图像作为第一图像，可提高第一朝向与目标对象的移动方向的匹配度，从而提高第一待标定摄像头的朝向的准确度。
作为一种可选的实施方式,上述图像序列还包括不同于上述第一图像的第二图像,第二图像由第一待标定摄像头采集得到。摄像头朝向标定装置还执行以下步骤:
12、确定上述第二图像中的上述目标对象的第三朝向。
本步骤的实现方式可参见步骤1,其中,第一图像与第二图像对应,第一朝向与第三朝向对应。
13、基于上述第三朝向和上述移动方向,得到上述第一待标定摄像头的第四朝向。
本步骤的实现方式可参见步骤404,其中,第一朝向与第三朝向对应,第二朝向与第四朝向对应。即第二朝向为摄像头朝向标定装置基于第一图像确定的第一待标定摄像头的朝向,第四朝向为摄像头朝向标定装置基于第二图像确定的第一待标定摄像头的朝向。
14、基于上述第二朝向和上述第四朝向,得到上述第一待标定摄像头的第五朝向。
由于在图像序列的不同时刻,目标对象的移动方向和朝向均可能不同,本步骤中,摄像头朝向标定装置基于第二朝向和第四朝向,得到第一待标定摄像头的第五朝向,可提高第一待标定摄像头的朝向的准确度。
在一种可能实现的方式中,摄像头朝向标定装置对第二朝向和第四朝向进行平均,得到第五朝向。
在步骤12~步骤14中,摄像头朝向标定装置首先基于第二图像确定第四朝向,再基于第二朝向和第四朝向得到第五朝向,可提高第一待标定摄像头的朝向的准确度。
作为一种可选的实施方式,摄像头朝向标定装置在执行步骤14的过程中执行以下步骤:
15、获取上述第二朝向的第一权重和上述第四朝向的第二权重。
在一种可能实现的方式中,第一图像的时间戳越大,第一权重的值就越大;第二图像的时间戳越大,第二权重的值就越大。
由于目标对象在离开第一待标定摄像头所拍摄的范围时的朝向与目标对象的移动方向的匹配度最高，基于图像序列中时间戳越大的图像得到的第一待标定摄像头的朝向的准确度越高。因此，在该种实现方式中，摄像头朝向标定装置基于图像的时间戳确定图像所对应的朝向的权重，进而基于朝向和朝向的权重得到第一待标定摄像头的朝向，可提高第一待标定摄像头的朝向的准确度。
16、基于上述第一权重和上述第二权重,对上述第二朝向和上述第四朝向进行加权平均,得到上述第五朝向。
应理解,在实际处理中,摄像头朝向标定装置可分别基于图像序列中的每张图像确定一个朝向,并分别为每个朝向确定一个权重。通过对所有朝向进行加权平均得到第一待标定摄像头的朝向。
作为一种可选的实施方式,摄像头朝向标定装置在执行步骤15的过程中执行以下步骤:
17、基于图像集中的至少一张图像确定上述第一待标定摄像头的朝向,得到朝向集,上述图像集包括上述图像序列中除上述第一图像和上述第二图像之外的图像。
例如,图像序列包括第一图像、第二图像、第三图像、第四图像。图像集可以包括第三图像,图像集也可以包括第四图像,图像集还可以包括第三图像和第四图像。
在图像集包括第三图像的情况下,摄像头朝向标定装置基于第三图像得到第一待标定摄像头的第六朝向,此时朝向集包括第六朝向;在图像集包括第四图像的情况下,摄像头朝向标定装置基于第四图像得到第一待标定摄像头的第七朝向,此时朝向集包括第七朝向;在图像集包括第三图像和第四图像的情况下,摄像头朝向标定装置基于第三图像得到第一待标定摄像头的第六朝向,并基于第四图像得到第一待标定摄像头的第七朝向,此时朝向集包括第六朝向和第七朝向。
18、确定上述朝向集中的第一目标朝向的数量,得到第一数量,上述第一目标朝向的朝向与上述第二朝向的朝向相同。
本步骤中,在朝向为角度的情况下,第一目标朝向的角度与第二朝向的角度相同。例如,第二朝向为东偏南60度,那么第一目标朝向为东偏南60度。
在朝向为以下中的一个的情况下:正东、正南、正北、正西,第一目标朝向的朝向与第二朝向的朝向相同。例如,若第二朝向为正东,那么第一目标朝向为正东;若第二朝向为正北,那么第一目标朝向为正北。
19、确定上述朝向集中的第二目标朝向的数量,得到第二数量,上述第二目标朝向的朝向与上述第四朝向的朝向相同。
本步骤中,在朝向为角度的情况下,第二目标朝向的角度与第四朝向的角度相同。例如,第四朝向为东偏南60度,那么第二目标朝向为东偏南60度。
在朝向为以下中的一个的情况下:正东、正南、正北、正西,第二目标朝向的朝向与第四朝向的朝向相同。例如,若第四朝向为正东,那么第二目标朝向为正东;若第四朝向为正北,那么第二目标朝向为正北。
20、基于上述第一数量得到上述第一权重,基于上述第二数量得到上述第二权重,上述第一权重和上述第一数量呈正相关,上述第二权重和上述第二数量呈正相关。
本步骤中，将第一数量与第一权重的比值称为第一比值，将第二数量与第二权重的比值称为第二比值，第一比值与第二比值相同。
假设第一数量为N1，第二数量为N2，第一权重为W1，第二权重为W2，在一种可能实现的方式中，N1、N2、W1、W2满足下式：
W1=k×N1，W2=k×N2
其中,k为正数。可选的,k=1。
在另一种可能实现的方式中，N1、N2、W1、W2满足下式：
W1=k×N1+c，W2=k×N2+c
其中，k为正数，c为非负数。可选的，k=1，c=0。
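上述由朝向数量计算权重的公式，可用如下Python草图示意(k、c的取值仅为示例)：

```python
# 示意性草图：权重与数量呈正相关，W=k×N+c，k为正数，c为非负数；k=1、c=0时W=N
def weight_from_count(n, k=1.0, c=0.0):
    return k * n + c

print(weight_from_count(10))                # 10.0，即k=1、c=0时W1=N1
print(weight_from_count(15, k=2.0, c=1.0))  # 31.0
```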
作为一种可选的实施方式,摄像头朝向标定装置在执行步骤16之前,还执行以下步骤:
21、获取朝向与方向角之间的映射关系。
本步骤中,方向角指与方向坐标系中的正北方向之间的夹角,其中,方向坐标系可参见图5。
在一种可能实现的方式中,朝向包括正南、正北、正东、正西。在上述映射关系中,正北与0度之间具有映射关系,正西与90度之间具有映射关系,正南与180度之间具有映射关系,正东与270度之间具有映射关系。
22、基于上述映射关系确定与上述第二朝向之间具有映射关系的第一角度。
23、基于上述映射关系确定与上述第四朝向之间具有映射关系的第二角度。
在得到第一角度和第二角度后,摄像头朝向标定装置在执行步骤16的过程中执行以下步骤:
24、基于上述第一权重和上述第二权重,对上述第一角度和上述第二角度进行加权平均得到第三角度,作为上述第五朝向。
例如,假设第一权重为10,第一角度为90度,第二权重为15,第二角度为180度。那么第三角度=(10×90°+15×180°)/(10+15)=144°。
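上述对第一角度和第二角度加权平均得到第三角度的计算，可用如下Python草图示意(函数名为说明用途的假设)：

```python
# 示意性草图：按权重对角度直接加权平均得到第三角度
# 注意：由于0°和360°相同，直接加权平均在其邻近处可能产生较大误差，
# 此时可先将角度映射到参考圆上再求平均
def weighted_angle_mean(angles_deg, weights):
    total = sum(weights)
    return sum(a * w for a, w in zip(angles_deg, weights)) / total

# 对应上文示例：权重10的90度与权重15的180度
print(weighted_angle_mean([90, 180], [10, 15]))  # 144.0
```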
在该种实施方式中，摄像头朝向标定装置基于映射关系，将第二朝向和第四朝向分别映射为第一角度和第二角度。通过对第一角度和第二角度进行加权平均，得到第三角度，并将第三角度作为第五朝向，以提高待标定摄像头的朝向的精度。
例如，若基于图像序列中的图像得到的待标定摄像头的朝向为正南、正北、正东、正西中的一个，但通过执行步骤21~步骤24，可将待标定摄像头的朝向精确到其他角度。如在步骤24的示例中，第二朝向为正西，第四朝向为正南，得到的第五朝向为144°。
作为一种可选的实施方式,在步骤24中,摄像头朝向标定装置通过执行以下步骤得到第三角度:
25、将上述第一角度映射为参考圆上的第一点,上述第一角度与第二夹角相同,上述第二夹角为第一向量与直角坐标系的坐标轴之间的夹角,上述第一向量为上述参考圆的圆心指向上述第一点的向量,上述参考圆在上述直角坐标系中。
本步骤中，坐标轴可以是直角坐标系的横轴，坐标轴也可以是直角坐标系的纵轴。假设第一角度为θ，在一种可能实现的方式中，坐标轴为直角坐标系的纵轴，参考圆的圆心为坐标原点，半径为1。此时第一点的坐标为(sinθ,cosθ)。
26、将上述第二角度映射为参考圆上的第二点,上述第二角度与第三夹角相同,上述第三夹角为第二向量与上述坐标轴之间的夹角,上述第二向量为上述圆心指向上述第二点的向量。
27、基于上述第一权重和上述第二权重,对上述第一点的坐标和上述第二点的坐标进行加权平均,得到第三点。
摄像头朝向标定装置通过执行步骤27，可得到第三点。例如，假设第一角度为90度，第二角度为180度，那么第一点的坐标为(1,0)，第二点的坐标为(0,-1)。若第一权重为10，第二权重为15，那么第三点的坐标为((10×1+15×0)/25,(10×0+15×(-1))/25)=(0.4,-0.6)。
28、确定第三向量与坐标轴之间的夹角，得到上述第三角度，上述第三向量为上述圆心指向上述第三点的向量。
例如，在步骤27的示例中，第三向量为(0.4,-0.6)。若坐标轴为直角坐标系的纵轴，第三角度为α，那么α满足：sinα=0.4/√(0.4²+0.6²)，cosα=-0.6/√(0.4²+0.6²)，即α约为146.3度。
由于0°和360°相同，若直接对角度进行加权平均，易导致较大的误差。例如，假设第一角度为0°，第二角度为359°，第一权重和第二权重均为1。对第一角度和第二角度进行加权平均得到的第三角度为179.5°。即第三角度的朝向接近正南，但第一角度的朝向和第二角度的朝向均接近正北，这显然存在较大的误差。摄像头朝向标定装置通过执行步骤25~步骤28，可降低出现上述误差的概率。
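步骤25~步骤28所述的先映射到参考圆、再对坐标加权平均并恢复角度的过程，可用如下Python草图示意(角度按上文约定相对正北方向度量，点坐标取(sinθ,cosθ)，函数名为说明用途的假设)：

```python
import math

# 示意性草图：将各角度映射为参考圆上的点，对点坐标加权平均后，
# 由圆心指向第三点的向量恢复角度，从而避免0°与360°邻近处的平均误差
def circular_weighted_mean(angles_deg, weights):
    total = sum(weights)
    x = sum(math.sin(math.radians(a)) * w for a, w in zip(angles_deg, weights)) / total
    y = sum(math.cos(math.radians(a)) * w for a, w in zip(angles_deg, weights)) / total
    return math.degrees(math.atan2(x, y)) % 360.0

# 上文反例中0°与359°的等权平均：直接平均得179.5°，而在参考圆上平均得359.5°，接近正北
print(round(circular_weighted_mean([0, 359], [1, 1]), 1))  # 359.5
```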
基于本申请实施例提供的技术方案，本申请实施例还提供了一种可能的应用场景。
目前,为了增强工作、生活或者社会环境中的安全性,会在各个区域场所内安装摄像头,以便根据摄像头采集到的视频流信息进行安全防护,如从视频流确定目标人物的行踪。
为提高根据视频流进行安全防护的效果,可确定摄像头的朝向。随着公共场所内摄像头数量的快速增长,如何高效、准确的确定大量摄像头的朝向具有非常重要的意义。
例如,A地的管理人员希望标定A地区的摄像头。但由于A地区的摄像头的数量较多,若通过人工标定的方式标定摄像头的朝向,会带来较大的人工成本,且标定效率较低。基于本申请实施例提供的 技术方案,可降低标定摄像头的朝向所耗费的人工成本,提高标定效率。
例如，管理人员在服务器(即上述摄像头朝向标定装置)和A地区所有摄像头之间建立通信连接。通过该通信连接，服务器可与A地区任意一个摄像头进行数据传输。
服务器在第一时间通过该通信连接获取A地区的摄像头采集的第一待处理视频流。服务器从第一待处理视频流中确定采集时间在预设时间内的视频流，得到第二待处理视频流。可选的，预设时间为1分钟。
例如,服务器在2021年3月2日9点13分2秒获取到第一待处理视频流,即第一时间为2021年3月2日9点13分2秒。服务器从第一待处理视频流选取采集时间在2021年3月2日9点12分2秒~2021年3月2日9点13分2秒之间的视频流,作为第二待处理视频流。
服务器对第二待处理视频流进行人体检测处理,从第二待处理视频流中选取包含人体的图像,得到第一待处理图像集。服务器对第一待处理图像集中的图像进行人体聚类处理,以从第一待处理图像集中确定包含相同人体的至少一张图像,得到第二待处理图像集,其中,人体聚类处理指基于图像的人体特征之间的相似度进行聚类。
例如,第一待处理图像集包括:图像a、图像b、图像c、图像d。服务器通过对第一待处理图像集进行人体聚类处理,确定图像a包含的人体和图像b包含的人体相同,图像c包含的人体和图像d包含的人体相同。此时,图像a和图像b组成的图像集为第二待处理图像集,图像c和图像d组成的图像集也为第二待处理图像集。
服务器基于第二待处理图像集,得到第二待处理图像集中的人物(即上述目标对象为人物,下文将第二待处理图像集中的人物称为第一目标人物)的轨迹。例如(例1),第二待处理图像集包含图像a和图像b,其中,图像a和图像b均包含张三,图像a由第一摄像头采集,图像b由第二摄像头采集,图像a的采集时间为t1,图像b的采集时间为t2,且t1早于t2。服务器进而可确定张三在t1时刻位于第一摄像头的位置,张三在t2时刻位于第二摄像头的位置。
又例如(例2),第二待处理图像集包含图像a、图像b、图像c,其中,图像a、图像b、图像c均包含张三,图像a由第一摄像头采集,图像b和图像c由第二摄像头采集,图像a的采集时间为t1,图像b的采集时间为t2,图像c的采集时间为t3,且t1早于t2,t2早于t3。服务器进而可确定张三在t1时刻位于第一摄像头的位置,张三在t2时刻和t3时刻位于第二摄像头的位置。
服务器获取地图数据，并基于该地图数据确定第一目标人物所在的第一目标道路的走向。服务器基于第一目标人物的轨迹和第一目标道路的走向，确定第一目标人物的移动方向。例如，假设在例1中，第二摄像头位于第一摄像头的东北方向。若第一目标道路的走向为南北方向，则张三的移动方向为正北方向。
又例如,假设在例2中,第二摄像头位于第一摄像头的西南方向。若第一目标道路的走向为南北方向,那么张三的移动方向为正南方向。
应理解,若将第二摄像头作为第一待标定摄像头,那么图像序列包括图像b和图像c。在该示例中,张三在采集图像序列的过程中的移动方向也为正南方向。
服务器确定第二待处理图像集中时间戳最大的图像,得到第一待处理图像。服务器确定在第一待处理图像中第一目标人物的朝向。服务器基于第一目标人物的朝向和第一目标人物的移动方向,得到摄像头的第一朝向。
例如,在第一待处理图像中第一目标人物的朝向为正面朝向监控摄像头,第一目标人物的移动方向为正北,那么摄像头的第一朝向为正南。
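上述由第一目标人物的朝向与移动方向确定摄像头朝向的规则，可用如下Python草图示意(方向以相对正北的角度表示，函数名为说明用途的假设)：

```python
# 示意性草图：目标正面朝向摄像头时，摄像头朝向与移动方向相反；背面朝向时相同
def camera_orientation(facing_camera, moving_angle_deg):
    """facing_camera为True表示目标对象正面朝向摄像头"""
    return (moving_angle_deg + 180) % 360 if facing_camera else moving_angle_deg

# 对应上文示例：目标正面朝向摄像头、向正北(0°)移动，摄像头朝向正南(180°)
print(camera_orientation(True, 0))    # 180
print(camera_orientation(False, 90))  # 90
```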
即服务器基于在第一时间获取到的第一待处理视频流,可得到摄像头的第一朝向。可选的,服务器基于第二时间获取到的摄像头采集到的视频流,可得到摄像头的第二朝向,…,服务器基于第n时间获取到的摄像头采集到的视频流,可得到摄像头的第n朝向。上述第一时间、第二时间,…,第n时间中两两均不同。
服务器基于下表分别确定第一朝向,第二朝向,…,第n朝向所对应的角度。
摄像头朝向 角度
正北 0°
正西 90°
正南 180°
正东 270°
表1
服务器确定角度为0°的朝向的数量,得到N1。服务器确定角度为90°的朝向的数量,得到N2。服务器确定角度为180°的朝向的数量,得到N3。服务器确定角度为270°的朝向的数量,得到N4。
服务器将0°映射到圆心为坐标原点,半径为1的参考圆上,例如,假设朝向所对应的角度为θ,θ在参考圆上的坐标为:(sinθ,cosθ)。这样,服务器将0°映射至参考圆上得到的点的坐标为(0,1),服务器将90°映射至参考圆上得到的点的坐标为(1,0),服务器将180°映射至参考圆上得到的点的坐标为(0,-1),服务器将270°映射至参考圆上得到的点的坐标为(-1,0)。
服务器基于下式确定摄像头的朝向所对应的角度:
x=(N2-N4)/(N1+N2+N3+N4)，y=(N1-N3)/(N1+N2+N3+N4)，sinβ=x/√(x²+y²)，cosβ=y/√(x²+y²)
服务器基于β和表1,确定摄像头的朝向。
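上述统计各角度朝向的数量、映射到参考圆并恢复角度β的流程，可用如下Python草图示意(表1的朝向与角度映射按上文给出，其余命名均为说明用途的假设)：

```python
import math

# 示意性草图：统计正北/正西/正南/正东的判定次数N1~N4，按(sinθ,cosθ)映射到参考圆，
# 对坐标按数量加权平均后恢复角度β
ANGLE_OF = {"正北": 0, "正西": 90, "正南": 180, "正东": 270}

def calibrate(orientations):
    counts = {angle: 0 for angle in (0, 90, 180, 270)}
    for orientation in orientations:
        counts[ANGLE_OF[orientation]] += 1
    total = sum(counts.values())
    x = sum(n * math.sin(math.radians(a)) for a, n in counts.items()) / total
    y = sum(n * math.cos(math.radians(a)) for a, n in counts.items()) / total
    return math.degrees(math.atan2(x, y)) % 360.0

# 10次判定为正西(90°)、15次判定为正南(180°)时，β介于90°与180°之间
print(round(calibrate(["正西"] * 10 + ["正南"] * 15), 1))  # 146.3
```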
本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的撰写顺序并不意味着严格的执行顺序而对实施过程构成任何限定,各步骤的执行顺序应当以其功能和可能的内在逻辑确定。
上述详细阐述了本申请实施例的方法,下面提供了本申请实施例的装置。
请参阅图6,图6为本申请实施例提供的一种摄像头朝向标定装置的结构示意图,该装置1包括:获取单元11、第一处理单元12、第二处理单元13、第三处理单元14,其中:
获取单元11,配置为获取目标对象的图像序列,所述图像序列包括第一图像;
第一处理单元12,配置为依据所述图像序列,得到所述目标对象在所述图像序列的采集过程中的移动方向;
第二处理单元13,配置为确定所述第一图像中所述目标对象的第一朝向;
第三处理单元14,配置为依据所述第一朝向和所述移动方向,得到第一待标定摄像头的第二朝向,所述第一待标定摄像头为采集所述第一图像的摄像头。
结合本申请任一实施例,所述第一朝向包括:所述目标对象的正面朝向所述第一待标定摄像头或所述目标对象的背面朝向所述第一待标定摄像头,所述第三处理单元14,配置为:
在所述第一朝向为所述目标对象的正面朝向所述第一待标定摄像头的情况下,确定所述第二朝向与所述移动方向相反;
在所述第一朝向为所述目标对象的背面朝向所述第一待标定摄像头的情况下,确定所述第二朝向与所述移动方向相同。
结合本申请任一实施例,所述第三处理单元14,配置为:
获取地图数据,所述地图数据包括目标道路,所述目标道路为所述目标对象移动时所在的道路;
依据所述图像序列,得到所述目标对象的第一轨迹;
依据所述第一轨迹和所述目标道路,得到所述目标对象的移动方向。
结合本申请任一实施例,所述图像序列中的图像由所述第一待标定摄像头采集得到;
所述第三处理单元14,配置为:
依据所述目标对象在所述图像序列的像素坐标系下的位置,以及所述图像序列中的图像的采集时间,得到所述目标对象在所述图像序列中的轨迹,作为所述第一轨迹;
依据所述图像序列,确定所述第一轨迹和所述目标道路之间的第一夹角;
依据所述第一夹角和所述目标道路的走向,得到所述目标对象的移动方向。
结合本申请任一实施例,所述图像序列包括第一图像子序列和第二图像子序列,所述第一图像子序列中的图像由所述第一待标定摄像头采集得到,所述第二图像子序列中的图像由第二待标定摄像头采集得到;
所述第三处理单元14,配置为:
依据所述第一图像子序列中的图像的采集时间、所述第二图像子序列中的图像的采集时间、所述第一待标定摄像头的位置和所述第二待标定摄像头的位置,得到所述目标对象在真实世界下的轨迹,作为所述第一轨迹。
结合本申请任一实施例,所述目标道路的走向包括第一方向和第二方向;
所述第三处理单元14,配置为:
在确定所述第一轨迹和所述第一方向匹配的情况下,确定所述移动方向为所述第一方向;
在确定所述第一轨迹和所述第二方向匹配的情况下,确定所述移动方向为所述第二方向。
结合本申请任一实施例,所述第一图像为所述图像序列中时间戳最大的图像。
结合本申请任一实施例,所述图像序列还包括不同于所述第一图像的第二图像,所述第二图像由所述第一待标定摄像头采集得到,所述第三处理单元14还配置为:
确定所述第二图像中的所述目标对象的第三朝向;
依据所述第三朝向和所述移动方向,得到所述第一待标定摄像头的第四朝向;
依据所述第二朝向和所述第四朝向,得到所述第一待标定摄像头的第五朝向。
结合本申请任一实施例,所述第三处理单元14还配置为:
获取所述第二朝向的第一权重和所述第四朝向的第二权重;
依据所述第一权重和所述第二权重,对所述第二朝向和所述第四朝向进行加权平均,得到所述第五朝向。
结合本申请任一实施例,所述第三处理单元14还配置为:
依据图像集中的至少一张图像确定所述第一待标定摄像头的朝向,得到朝向集,所述图像集包括所述图像序列中除所述第一图像和所述第二图像之外的图像;
确定所述朝向集中的第一目标朝向的数量,得到第一数量,所述第一目标朝向的朝向与所述第二朝向的朝向相同;
依据所述第一数量得到所述第一权重,所述第一权重和所述第一数量呈正相关。
结合本申请任一实施例,所述第三处理单元14还配置为:
确定所述第一待标定摄像头的朝向集中的第二目标朝向的数量,得到第二数量,所述第二目标朝向的朝向与所述第四朝向的朝向相同;
依据所述第二数量得到所述第二权重,所述第二权重和所述第二数量呈正相关。
结合本申请任一实施例,所述获取单元11,还配置为:在所述依据所述第一权重和所述第二权重,对所述第二朝向和所述第四朝向进行加权平均,得到所述第五朝向之前,获取朝向与方向角之间的映射关系;
所述第三处理单元14还配置为:
依据所述映射关系确定与所述第二朝向之间具有映射关系的第一角度;
依据所述映射关系确定与所述第四朝向之间具有映射关系的第二角度;
依据所述第一权重和所述第二权重,对所述第一角度和所述第二角度进行加权平均得到第三角度,作为所述第五朝向。
结合本申请任一实施例,所述第三处理单元14还配置为:
将所述第一角度映射为参考圆上的第一点,所述第一角度与第二夹角相同,所述第二夹角为第一向量与直角坐标系的坐标轴之间的夹角,所述第一向量为所述参考圆的圆心指向所述第一点的向量,所述参考圆在所述直角坐标系中;
将所述第二角度映射为参考圆上的第二点,所述第二角度与第三夹角相同,所述第三夹角为第二向量与所述坐标轴之间的夹角,所述第二向量为所述圆心指向所述第二点的向量;
依据所述第一权重和所述第二权重,对所述第一点的坐标和所述第二点的坐标进行加权平均,得到第三点;
确定所述第三向量与所述坐标轴之间的夹角,得到所述第三角度,所述第三向量为所述圆心指向所述第三点的向量。
结合本申请任一实施例,所述第一待标定摄像头包括摄像头。
在一些实施例中,本申请实施例提供的装置具有的功能或包含的模块可以配置为执行上文方法实施例描述的方法,其实现可以参照上文方法实施例的描述,为了简洁,这里不再赘述。
图7为本申请实施例提供的一种摄像头朝向标定装置的硬件结构示意图。该摄像头朝向标定装置2包括处理器21,存储器22,输入装置23,输出装置24。该处理器21、存储器22、输入装置23和输出装置24通过连接器相耦合,该连接器包括各类接口、传输线或总线等等,本申请实施例对此不作限定。应当理解,本申请的各个实施例中,耦合是指通过特定方式的相互联系,包括直接相连或者通过其他设备间接相连,例如可以通过各类接口、传输线、总线等相连。
处理器21可以是一个或多个图形处理器(graphics processing unit,GPU),在处理器21是一个GPU的情况下,该GPU可以是单核GPU,也可以是多核GPU。可选的,处理器21可以是多个GPU构成的处理器组,多个处理器之间通过一个或多个总线彼此耦合。可选的,该处理器还可以为其他类型的处理器等等,本申请实施例不作限定。
存储器22可配置为存储计算机程序指令，以及包括配置为执行本申请实施例方案的程序代码在内的各类计算机程序代码。可选地，存储器包括但不限于是随机存储记忆体(random access memory，RAM)、只读存储器(read-only memory，ROM)、可擦除可编程只读存储器(erasable programmable read only memory，EPROM)、或便携式只读存储器(compact disc read-only memory，CD-ROM)，该存储器配置为存储相关指令及数据。
输入装置23配置为输入以下至少之一:数据、信号,以及输出装置24配置为输出以下至少之一:数据、信号。输入装置23和输出装置24可以是独立的器件,也可以是一个整体的器件。
可理解,本申请实施例中,存储器22不仅可配置为存储相关指令,还可配置为存储相关数据,如该存储器22可配置为存储通过输入装置23获取的图像序列,又或者该存储器22还可配置为存储通过处理器21得到的第二朝向等等,本申请实施例对于该存储器中所存储的数据不作限定。
可以理解的是,图7仅仅示出了一种摄像头朝向标定装置的简化设计。在实际应用中,摄像头朝向标定装置还可以分别包含必要的其他元件,包含但不限于任意数量的输入/输出装置、处理器、存储器等,而所有可以实现本申请实施例的摄像头朝向标定装置都在本申请的保护范围之内。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。所属领域的技术人员还可以清楚地了解到,本申请各个实施例描述各有侧重,为描述的方便和简洁,相同或类似的部分在不同实施例中可能没有赘述,因此,在某一实施例未描述或未详细描述的部分可以参见其他实施例的记载。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者通过所述计算机可读存储介质进行传输。所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,数字通用光盘(digital versatile disc,DVD))、或者半导体介质(例如固态硬盘(solid state disk,SSD))等。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,该流程可以由计算机程序来指令相关的硬件完成,该程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法实施例的流程。而前述的存储介质包括:只读存储器(read-only memory,ROM)或随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可存储程序代码的介质。
工业实用性
本公开实施例公开了一种摄像头朝向标定方法、装置、设备、存储介质及程序,其中,一种摄像头朝向标定方法,该方法由电子设备执行,该方法包括:获取目标对象的图像序列,图像序列包括第一图像;基于图像序列,得到目标对象在图像序列的采集过程中的移动方向;确定第一图像中目标对象的第一朝向;基于第一朝向和移动方向,得到第一待标定摄像头的第二朝向,第一待标定摄像头为采集第一图像的摄像头。

Claims (31)

  1. 一种摄像头朝向标定方法,所述方法由电子设备执行,所述方法包括:
    获取目标对象的图像序列,所述图像序列包括第一图像;
    基于所述图像序列,得到所述目标对象在所述图像序列的采集过程中的移动方向;
    确定所述第一图像中所述目标对象的第一朝向;
    基于所述第一朝向和所述移动方向,得到第一待标定摄像头的第二朝向,所述第一待标定摄像头为采集所述第一图像的摄像头。
  2. 根据权利要求1所述的方法,其中,所述第一朝向包括:所述目标对象的正面朝向所述第一待标定摄像头或所述目标对象的背面朝向所述第一待标定摄像头,所述基于所述第一朝向和所述移动方向,得到所述第一待标定摄像头的第二朝向,包括:
    在所述第一朝向为所述目标对象的正面朝向所述第一待标定摄像头的情况下,确定所述第二朝向与所述移动方向相反;
    在所述第一朝向为所述目标对象的背面朝向所述第一待标定摄像头的情况下,确定所述第二朝向与所述移动方向相同。
  3. 根据权利要求1或2所述的方法,其中,所述基于所述图像序列,得到所述目标对象在所述图像序列的采集过程中的移动方向,包括:
    获取地图数据,所述地图数据包括目标道路,所述目标道路为所述目标对象移动时所在的道路;
    基于所述图像序列,得到所述目标对象的第一轨迹;
    基于所述第一轨迹和所述目标道路,得到所述目标对象的移动方向。
  4. 根据权利要求3所述的方法,其中,所述图像序列中的图像由所述第一待标定摄像头采集得到;
    所述基于所述图像序列,得到所述目标对象的第一轨迹,包括:
    基于所述目标对象在所述图像序列的像素坐标系下的位置,以及所述图像序列中的图像的采集时间,得到所述目标对象在所述图像序列中的轨迹,作为所述第一轨迹;
    所述基于所述第一轨迹和所述目标道路,得到所述目标对象的移动方向,包括:
    基于所述图像序列,确定所述第一轨迹和所述目标道路之间的第一夹角;
    基于所述第一夹角和所述目标道路的走向,得到所述目标对象的移动方向。
  5. 根据权利要求3或4所述的方法,其中,所述图像序列包括第一图像子序列和第二图像子序列,所述第一图像子序列中的图像由所述第一待标定摄像头采集得到,所述第二图像子序列中的图像由第二待标定摄像头采集得到;
    所述基于所述图像序列,得到所述目标对象的第一轨迹,包括:
    基于所述第一图像子序列中的图像的采集时间、所述第二图像子序列中的图像的采集时间、所述第一待标定摄像头的位置和所述第二待标定摄像头的位置,得到所述目标对象在真实世界下的轨迹,作为所述第一轨迹。
  6. 根据权利要求3或4所述的方法,其中,所述目标道路的走向包括第一方向和第二方向;
    所述基于所述第一轨迹和所述目标道路,得到所述目标对象的移动方向,包括:
    在确定所述第一轨迹和所述第一方向匹配的情况下,确定所述移动方向为所述第一方向;
    在确定所述第一轨迹和所述第二方向匹配的情况下,确定所述移动方向为所述第二方向。
  7. 根据权利要求1至6中任意一项所述的方法,其中,所述第一图像为所述图像序列中时间戳最大的图像。
  8. 根据权利要求1至7中任意一项所述的方法,其中,所述图像序列还包括不同于所述第一图像的第二图像,所述第二图像由所述第一待标定摄像头采集得到,所述方法还包括:
    确定所述第二图像中的所述目标对象的第三朝向;
    基于所述第三朝向和所述移动方向,得到所述第一待标定摄像头的第四朝向;
    基于所述第二朝向和所述第四朝向,得到所述第一待标定摄像头的第五朝向。
  9. 根据权利要求8所述的方法,其中,所述基于所述第二朝向和所述第四朝向,得到所述第一待标定摄像头的第五朝向,包括:
    获取所述第二朝向的第一权重和所述第四朝向的第二权重;
    基于所述第一权重和所述第二权重,对所述第二朝向和所述第四朝向进行加权平均,得到所述第五朝向。
  10. 根据权利要求9所述的方法,其中,所述获取所述第二朝向的第一权重,包括:
    基于图像集中的至少一张图像确定所述第一待标定摄像头的朝向,得到朝向集,所述图像集包括所述图像序列中除所述第一图像和所述第二图像之外的图像;
    确定所述朝向集中的第一目标朝向的数量,得到第一数量,所述第一目标朝向的朝向与所述第二朝向的朝向相同;
    基于所述第一数量得到所述第一权重,所述第一权重和所述第一数量呈正相关。
  11. 根据权利要求9或10所述的方法,其中,所述获取所述第四朝向的第二权重,包括:
    确定所述第一待标定摄像头的朝向集中的第二目标朝向的数量,得到第二数量,所述第二目标朝向的朝向与所述第四朝向的朝向相同;
    基于所述第二数量得到所述第二权重,所述第二权重和所述第二数量呈正相关。
  12. 根据权利要求10或11所述的方法,其中,在所述基于所述第一权重和所述第二权重,对所述第二朝向和所述第四朝向进行加权平均,得到所述第五朝向之前,所述方法还包括:
    获取朝向与方向角之间的映射关系;
    基于所述映射关系确定与所述第二朝向之间具有映射关系的第一角度;
    基于所述映射关系确定与所述第四朝向之间具有映射关系的第二角度;
    所述基于所述第一权重和所述第二权重,对所述第二朝向和所述第四朝向进行加权平均,得到所述第五朝向,包括:
    基于所述第一权重和所述第二权重,对所述第一角度和所述第二角度进行加权平均得到第三角度,作为所述第五朝向。
  13. 根据权利要求12所述的方法,其中,所述基于所述第一权重和所述第二权重,对所述第一角度和所述第二角度进行加权平均得到第三角度,包括:
    将所述第一角度映射为参考圆上的第一点,所述第一角度与第二夹角相同,所述第二夹角为第一向量与直角坐标系的坐标轴之间的夹角,所述第一向量为所述参考圆的圆心指向所述第一点的向量,所述参考圆在所述直角坐标系中;
    将所述第二角度映射为参考圆上的第二点,所述第二角度与第三夹角相同,所述第三夹角为第二向量与所述坐标轴之间的夹角,所述第二向量为所述圆心指向所述第二点的向量;
    基于所述第一权重和所述第二权重,对所述第一点的坐标和所述第二点的坐标进行加权平均,得到第三点;
    确定所述第三向量与所述坐标轴之间的夹角,得到所述第三角度,所述第三向量为所述圆心指向所述第三点的向量。
  14. 根据权利要求1至13中任意一项所述的方法,其中,所述第一待标定摄像头包括摄像头。
  15. 一种摄像头朝向标定装置,所述装置包括:
    获取单元,配置为获取目标对象的图像序列,所述图像序列包括第一图像;
    第一处理单元,配置为基于所述图像序列,得到所述目标对象在所述图像序列的采集过程中的移动方向;
    第二处理单元,配置为确定所述第一图像中所述目标对象的第一朝向;
    第三处理单元,配置为基于所述第一朝向和所述移动方向,得到所述第一待标定摄像头的第二朝向,所述第一待标定摄像头为采集所述第一图像的摄像头。
  16. 根据权利要求15所述的标定装置,其中,所述第一朝向包括:所述目标对象的正面朝向所述第一待标定摄像头或所述目标对象的背面朝向所述第一待标定摄像头,所述第三处理单元,配置为:
    在所述第一朝向为所述目标对象的正面朝向所述第一待标定摄像头的情况下,确定所述第二朝向与所述移动方向相反;
    在所述第一朝向为所述目标对象的背面朝向所述第一待标定摄像头的情况下,确定所述第二朝向与所述移动方向相同。
  17. 根据权利要求15或16所述的标定装置,其中,所述第三处理单元,配置为:
    获取地图数据,所述地图数据包括目标道路,所述目标道路为所述目标对象移动时所在的道路;
    基于所述图像序列,得到所述目标对象的第一轨迹;
    基于所述第一轨迹和所述目标道路,得到所述目标对象的移动方向。
  18. 根据权利要求17所述的标定装置,其中,所述图像序列中的图像由所述第一待标定摄像头采集得到;
    所述第三处理单元,配置为:
    基于所述目标对象在所述图像序列的像素坐标系下的位置,以及所述图像序列中的图像的采集时间,得到所述目标对象在所述图像序列中的轨迹,作为所述第一轨迹;
    基于所述图像序列,确定所述第一轨迹和所述目标道路之间的第一夹角;
    基于所述第一夹角和所述目标道路的走向,得到所述目标对象的移动方向。
  19. 根据权利要求17或18所述的标定装置,其中,所述图像序列包括第一图像子序列和第二图像子序列,所述第一图像子序列中的图像由所述第一待标定摄像头采集得到,所述第二图像子序列中的图像由第二待标定摄像头采集得到;
    所述第三处理单元,配置为:
    基于所述第一图像子序列中的图像的采集时间、所述第二图像子序列中的图像的采集时间、所述第一待标定摄像头的位置和所述第二待标定摄像头的位置,得到所述目标对象在真实世界下的轨迹,作为所述第一轨迹。
  20. 根据权利要求17或18所述的标定装置,其中,所述目标道路的走向包括第一方向和第二方向;所述第三处理单元,配置为:
    在确定所述第一轨迹和所述第一方向匹配的情况下,确定所述移动方向为所述第一方向;
    在确定所述第一轨迹和所述第二方向匹配的情况下,确定所述移动方向为所述第二方向。
  21. 根据权利要求15至20中任意一项所述的标定装置,其中,所述第一图像为所述图像序列中时间戳最大的图像。
  22. 根据权利要求15至21中任意一项所述的标定装置,其中,所述图像序列还包括不同于所述第一图像的第二图像,所述第二图像由所述第一待标定摄像头采集得到,所述第三处理单元还配置为:
    确定所述第二图像中的所述目标对象的第三朝向;
    基于所述第三朝向和所述移动方向,得到所述第一待标定摄像头的第四朝向;
    基于所述第二朝向和所述第四朝向,得到所述第一待标定摄像头的第五朝向。
  23. 根据权利要求22所述的标定装置,其中,所述第三处理单元还配置为:
    获取所述第二朝向的第一权重和所述第四朝向的第二权重;
    基于所述第一权重和所述第二权重,对所述第二朝向和所述第四朝向进行加权平均,得到所述第五朝向。
  24. 根据权利要求23所述的标定装置,其中,所述第三处理单元还配置为:
    基于图像集中的至少一张图像确定所述第一待标定摄像头的朝向,得到朝向集,所述图像集包括所述图像序列中除所述第一图像和所述第二图像之外的图像;
    确定所述朝向集中的第一目标朝向的数量,得到第一数量,所述第一目标朝向的朝向与所述第二朝向的朝向相同;
    基于所述第一数量得到所述第一权重,所述第一权重和所述第一数量呈正相关。
  25. 根据权利要求23或24所述的标定装置,其中,所述第三处理单元还配置为:
    确定所述第一待标定摄像头的朝向集中的第二目标朝向的数量,得到第二数量,所述第二目标朝向的朝向与所述第四朝向的朝向相同;
    基于所述第二数量得到所述第二权重,所述第二权重和所述第二数量呈正相关。
  26. 根据权利要求24或25所述的标定装置,其中,所述获取单元,还配置为:在所述基于所述第一权重和所述第二权重,对所述第二朝向和所述第四朝向进行加权平均,得到所述第五朝向之前,获取朝向与方向角之间的映射关系;
    所述第三处理单元还配置为:
    基于所述映射关系确定与所述第二朝向之间具有映射关系的第一角度;
    基于所述映射关系确定与所述第四朝向之间具有映射关系的第二角度;
    基于所述第一权重和所述第二权重,对所述第一角度和所述第二角度进行加权平均得到第三角度,作为所述第五朝向。
  27. 根据权利要求26所述的标定装置,其中,所述第三处理单元还配置为:
    将所述第一角度映射为参考圆上的第一点,所述第一角度与第二夹角相同,所述第二夹角为第一向量与直角坐标系的坐标轴之间的夹角,所述第一向量为所述参考圆的圆心指向所述第一点的向量,所述参考圆在所述直角坐标系中;
    将所述第二角度映射为参考圆上的第二点,所述第二角度与第三夹角相同,所述第三夹角为第二向量与所述坐标轴之间的夹角,所述第二向量为所述圆心指向所述第二点的向量;
    基于所述第一权重和所述第二权重,对所述第一点的坐标和所述第二点的坐标进行加权平均,得到第三点;
    确定所述第三向量与所述坐标轴之间的夹角,得到所述第三角度,所述第三向量为所述圆心指向所述第三点的向量。
  28. 根据权利要求15至27中任意一项所述的标定装置,其中,所述第一待标定摄像头包括摄像头。
  29. 一种电子设备,包括:处理器和存储器,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,在所述处理器执行所述计算机指令的情况下,所述电子设备执行如权利要求1至14中任意一项所述的方法。
  30. 一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机程序,所述计算机程序包括程序指令,在所述程序指令被处理器执行的情况下,使所述处理器执行权利要求1至14中任意一项所述的方法。
  31. 一种计算机程序产品,所述计算机程序产品包括计算机程序或指令,在所述计算机程序或指令在计算机上运行的情况下,使得所述计算机执行权利要求1至14中任意一项所述的方法。
PCT/CN2021/102931 2021-03-25 2021-06-29 摄像头朝向标定方法、装置、设备、存储介质及程序 WO2022198822A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110319369.9A CN112950726B (zh) 2021-03-25 2021-03-25 摄像头朝向标定方法及相关产品
CN202110319369.9 2021-03-25

Publications (1)

Publication Number Publication Date
WO2022198822A1 true WO2022198822A1 (zh) 2022-09-29

Family

ID=76226791

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/102931 WO2022198822A1 (zh) 2021-03-25 2021-06-29 摄像头朝向标定方法、装置、设备、存储介质及程序

Country Status (2)

Country Link
CN (1) CN112950726B (zh)
WO (1) WO2022198822A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950726B (zh) * 2021-03-25 2022-11-11 深圳市商汤科技有限公司 摄像头朝向标定方法及相关产品

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100295948A1 (en) * 2009-05-21 2010-11-25 Vimicro Corporation Method and device for camera calibration
CN110189379A (zh) * 2019-05-28 2019-08-30 广州小鹏汽车科技有限公司 一种相机外部参数的标定方法及系统
CN112308931A (zh) * 2020-11-02 2021-02-02 深圳市泰沃德技术有限公司 相机的标定方法、装置、计算机设备和存储介质
CN112950726A (zh) * 2021-03-25 2021-06-11 深圳市商汤科技有限公司 摄像头朝向标定方法及相关产品

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101118648A (zh) * 2007-05-22 2008-02-06 南京大学 交通监视环境下的路况摄像机标定方法
US9286678B2 (en) * 2011-12-28 2016-03-15 Pelco, Inc. Camera calibration using feature identification
WO2017130639A1 (ja) * 2016-01-28 2017-08-03 株式会社リコー 画像処理装置、撮像装置、移動体機器制御システム、画像処理方法、及びプログラム
CN108694882B (zh) * 2017-04-11 2020-09-22 百度在线网络技术(北京)有限公司 用于标注地图的方法、装置和设备
CN107220632B (zh) * 2017-06-12 2020-02-18 山东大学 一种基于法向特征的路面图像分割方法
JP6985593B2 (ja) * 2017-10-18 2021-12-22 富士通株式会社 画像処理プログラム、画像処理装置および画像処理方法
CN109886078B (zh) * 2018-12-29 2022-02-18 华为技术有限公司 目标对象的检索定位方法和装置
CN112446920A (zh) * 2019-09-05 2021-03-05 华为技术有限公司 一种摄像机位置的确定方法及装置


Also Published As

Publication number Publication date
CN112950726A (zh) 2021-06-11
CN112950726B (zh) 2022-11-11

Similar Documents

Publication Publication Date Title
CN110457414B (zh) 离线地图处理、虚拟对象显示方法、装置、介质和设备
WO2019170166A1 (zh) 深度相机标定方法以及装置、电子设备及存储介质
CN112528831B (zh) 多目标姿态估计方法、多目标姿态估计装置及终端设备
WO2021134960A1 (zh) 标定方法及装置、处理器、电子设备、存储介质
WO2022095596A1 (zh) 图像对齐方法、图像对齐装置及终端设备
CN112288853B (zh) 三维重建方法、三维重建装置、存储介质
WO2022237811A1 (zh) 图像处理方法、装置及设备
CN111784776B (zh) 视觉定位方法及装置、计算机可读介质和电子设备
WO2021136386A1 (zh) 数据处理方法、终端和服务器
WO2021112382A1 (en) Apparatus and method for dynamic multi-camera rectification using depth camera
TWI779801B (zh) 測溫方法、電子設備及電腦可讀儲存介質
CN111582240B (zh) 一种对象数量的识别方法、装置、设备和介质
WO2021101097A1 (en) Multi-task fusion neural network architecture
WO2022198822A1 (zh) 摄像头朝向标定方法、装置、设备、存储介质及程序
WO2022247126A1 (zh) 视觉定位方法、装置、设备、介质及程序
WO2019218900A9 (zh) 一种神经网络模型、数据处理方法及处理装置
CN112288878B (zh) 增强现实预览方法及预览装置、电子设备及存储介质
CN113052915A (zh) 相机外参标定方法、装置、增强现实系统、终端设备及存储介质
WO2023093407A1 (zh) 标定方法及装置、电子设备及计算机可读存储介质
CN116778222A (zh) 一种基于人体姿态的脊柱侧弯检测的方法、系统、存储介质及电子设备
CA3142001C (en) Spherical image based registration and self-localization for onsite and offsite viewing
CN109801300A (zh) 棋盘格角点的坐标提取方法、装置、设备及计算机可读存储介质
TWI739601B (zh) 圖像處理方法、電子設備和儲存介質
CN111223139A (zh) 目标定位方法及终端设备
CN112446928B (zh) 一种拍摄装置的外参确定系统和方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21932438

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.01.2024)