WO2022194145A1 - Method, Apparatus, Device and Medium for Determining a Shooting Position

Info

Publication number: WO2022194145A1
Application number: PCT/CN2022/080916
Authority: WO — WIPO (PCT)
Prior art keywords: target, information, position information, image, shooting position
Other languages: English (en), French (fr)
Inventors: 郭亨凯, 杜思聪
Original assignee: 北京字跳网络技术有限公司
Application filed by 北京字跳网络技术有限公司
Priority: US 18/468,647, published as US20240005552A1
(The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 — Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 — Complex mathematical operations
    • G06F 17/16 — Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/70 — Determining position or orientation of objects or cameras
    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 — Control of cameras or camera modules

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular, to a method, apparatus, device, and medium for determining a shooting position.
  • the present disclosure provides a method, apparatus, device and medium for determining a shooting position.
  • An embodiment of the present disclosure provides a method for determining a shooting position, the method including: determining attribute information of a target area in a target image, where the attribute information includes position information and size information, and the target area is the area where the target-shaped object is located in the target image; and determining the shooting position information of the target image according to the attribute information of the target area and a camera projection model, where the shooting position information is the position information of the shooting position relative to the world coordinate system.
  • Embodiments of the present disclosure also provide a device for determining a shooting position, the device comprising:
  • the image information module is used to determine the attribute information of the target area in the target image, wherein the attribute information includes position information and size information, and the target area is the area where the target shape object is located in the target image;
  • the shooting position module is configured to determine the shooting position information of the target image according to the attribute information of the target area and the camera projection model, wherein the shooting position information is the position information of the shooting position relative to the world coordinate system.
  • An embodiment of the present disclosure further provides an electronic device, the electronic device including: a processor; and a memory for storing instructions executable by the processor; the processor being configured to read the executable instructions from the memory and execute the instructions to implement the method for determining a shooting position provided by the embodiments of the present disclosure.
  • An embodiment of the present disclosure further provides a computer-readable storage medium, where the storage medium stores a computer program, and the computer program is used to execute the method for determining a shooting position provided by the embodiment of the present disclosure.
  • Embodiments of the present disclosure also provide a computer program product, including computer programs/instructions, when the computer program/instructions are executed by a processor, the method for determining a shooting position provided by the embodiments of the present disclosure is implemented.
  • The technical solution provided by the embodiments of the present disclosure has the following advantages: the solution determines the attribute information of the target area in the target image, where the attribute information includes position information and size information, and the target area is the area where the target-shaped object is located in the target image; and determines the shooting position information of the target image according to the attribute information of the target area and the camera projection model, where the shooting position information is the position information of the shooting position relative to the world coordinate system.
  • In this way, the position and size of the region where the fixed-shape object is located in a single image, together with the camera projection model, can determine the shooting position; this two-dimensional information can efficiently locate the shooting position and improve calculation efficiency.
  • FIG. 1 is a schematic flowchart of a method for determining a shooting position according to an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of another method for determining a shooting position according to an embodiment of the present disclosure
  • FIG. 3 is a schematic structural diagram of a device for determining a shooting position according to an embodiment of the present disclosure
  • FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • The term "including" and variations thereof denote open-ended inclusion, i.e., "including but not limited to."
  • The term "based on" means "based at least in part on."
  • The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments." Relevant definitions of other terms will be given in the description below.
  • FIG. 1 is a schematic flowchart of a method for determining a shooting position provided by an embodiment of the present disclosure.
  • The method may be executed by an apparatus for determining a shooting position, where the apparatus may be implemented by software and/or hardware and may generally be integrated in an electronic device.
  • the method includes:
  • Step 101 Determine attribute information of a target area in the target image, wherein the attribute information includes position information and size information, and the target area is the area where the target shape object is located in the target image.
  • the target image may be any image captured by a capturing device whose position needs to be determined, may be an image captured in real time, or may be any image frame in a video captured in real time, which is not particularly limited.
  • the target area may be the area where the object of the target shape is located in the target image, that is, the area with the target shape.
  • the target shape refers to a shape that can be represented by an equation.
  • the target shape may include an ellipse, a circle, etc.
  • The embodiments of the present disclosure take the target area as an elliptical area in the target image for illustration.
  • The position information may be information that represents the location of the target area in the target image, and may specifically include information such as vertex coordinates and center point coordinates of the target area in the target image.
  • The size information refers to the size of the target area. For example, when the target area is an elliptical area, the attribute information may include the coordinates of the center point of the elliptical area, the lengths of the major and minor axes, and the like.
  • any detection method may be used to determine the position information and size information of the target area in the target image, for example, a preset detection algorithm or a feature point tracking algorithm may be used to determine the position information.
  • Step 102 Determine the shooting position information of the target image according to the attribute information of the target area and the camera projection model, wherein the shooting position information is the position information of the shooting position relative to the world coordinate system.
  • the shooting position information may be the position information of the position of the shooting device that shoots the target image relative to the world coordinate system
  • the camera projection model may be the pinhole projection model of the shooting device.
  • The photographing device in the embodiments of the present disclosure may be a device with an image acquisition function; it may be a standalone photographing device, or a photographing module on a terminal device, such as the camera module on a mobile phone.
  • In some embodiments, determining the shooting position information of the target image according to the attribute information of the target area and the camera projection model includes: inputting the attribute information of the target area into the projection equation of the camera projection model to determine the displacement information from the shooting position to the target-shaped object in the world coordinate system; and performing position solving according to that displacement information and the position information of the target-shaped object in the world coordinate system to obtain the shooting position information.
  • the position information of the target shape object in the world coordinate system can be a preset fixed value, which is a known quantity.
  • the origin of the world coordinate system can be set at the position where the target shape object is located, and the coordinates of the position information are (0, 0, 0).
  • the shooting position information can be determined.
  • W10, W20 and W12 are all vectors in the world coordinate system, having both direction and magnitude.
  • W10 is the vector from the shooting position to the origin of the world coordinate system.
  • W20 is the vector from the position of the target-shaped object to the origin of the world coordinate system.
  • W12 is the vector from the shooting position to the target-shaped object.
  • The two vectors W12 and W20 are added using the triangle rule: the two vectors are connected end to end in turn, and the result W10 is the vector from the starting point of the first vector to the end point of the last vector.
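The triangle-rule addition above can be sketched in a few lines of Python (the numeric value for W12 is hypothetical; in practice W12 is solved from the projection equation):

```python
def vec_add(a, b):
    """Add two 3-D vectors component-wise (triangle rule)."""
    return tuple(x + y for x, y in zip(a, b))

# Hypothetical displacement from the shooting position to the target-shaped
# object, expressed in the world coordinate system.
W12 = (0.4, -0.1, 2.5)
# Vector from the target-shaped object to the world origin; zero when the
# world origin is placed at the object, as the disclosure suggests.
W20 = (0.0, 0.0, 0.0)

# W10: vector from the shooting position to the world origin.
W10 = vec_add(W12, W20)
# The shooting position itself is the opposite vector, seen from the origin.
shooting_position = tuple(-x for x in W10)
```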
  • the above W12 is an unknown quantity, which can be calculated by inputting the position information and size information of the target area into the projection equation of the camera projection model.
  • In the projection equation, the position information of the target area is related to the size information of the target area, the internal parameters of the photographing device, the rotation matrix from the coordinate system where the shooting position is located to the world coordinate system, and the position information of the origin of the world coordinate system in the coordinate system where the shooting position is located.
  • R12 represents the rotation matrix from the coordinate system where the shooting position is located to the world coordinate system
  • p represents the position information of the target area
  • T represents the position information of the origin of the world coordinate system in the coordinate system where the shooting position is located.
  • R23 represents the rotation matrix from the world coordinate system to the target shape object coordinate system
  • the origin of the coordinate system where the shooting position is located is the shooting position
  • the origin of the target shape object coordinate system is the target shape object.
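As a rough illustration of the pinhole camera projection referenced by these symbols, the following sketch projects a world point into pixel coordinates via p ~ K(RX + T). The intrinsic values are hypothetical, and lens distortion is ignored:

```python
def project_point(K, R, T, X):
    """Project a world point X into pixel coordinates: p ~ K (R X + T).

    K: 3x3 intrinsic matrix; R: 3x3 rotation (world -> camera frame);
    T: position of the world origin in the camera frame; X: world point.
    """
    # Transform the point into the camera (shooting-position) frame.
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + T[i] for i in range(3)]
    # Perspective division onto the image plane.
    u = (K[0][0] * Xc[0]) / Xc[2] + K[0][2]
    v = (K[1][1] * Xc[1]) / Xc[2] + K[1][2]
    return u, v

# Hypothetical intrinsics: focal lengths 800 px, principal point (320, 240).
K = [[800, 0, 320], [0, 800, 240], [0, 0, 1]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # no rotation
T = [0.0, 0.0, 5.0]                    # world origin 5 m in front of the camera

u, v = project_point(K, I, T, (0.0, 0.0, 0.0))
```

With this setup the world origin projects to the principal point, as expected for a point on the optical axis.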
  • When the target image is an image frame in a video and the shooting position needs to be determined for each image frame in the video, the Kalman filter algorithm can be used to smooth the results, which avoids jumps in the shooting position and improves the accuracy of the determined shooting position information.
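A minimal per-axis Kalman smoother of the kind suggested here might look like the following sketch (the noise values are hypothetical tuning parameters, and one instance would be run per coordinate of the shooting position):

```python
class ScalarKalman:
    """1-D constant-position Kalman filter for smoothing a noisy sequence."""

    def __init__(self, process_noise=1e-3, measurement_noise=1e-2):
        self.x = None            # current state estimate
        self.p = 1.0             # estimate covariance
        self.q = process_noise
        self.r = measurement_noise

    def update(self, z):
        if self.x is None:       # first measurement initialises the state
            self.x = z
            return self.x
        self.p += self.q                      # predict
        k = self.p / (self.p + self.r)        # Kalman gain
        self.x += k * (z - self.x)            # correct toward the measurement
        self.p *= (1.0 - k)
        return self.x

kf = ScalarKalman()
smoothed = [kf.update(z) for z in [1.0, 1.2, 0.8, 1.1]]
```

The smoothed sequence stays close to the measurements while damping frame-to-frame jumps.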
  • the shooting position determination solution determines the attribute information of the target area in the target image, wherein the attribute information includes position information and size information, and the target area is the area where the target shape object is located in the target image; according to the attribute information of the target area and a camera projection model to determine the shooting position information of the target image, wherein the shooting position information is the position information of the shooting position relative to the world coordinate system.
  • In this way, the position and size of the area where the fixed-shape object is located in a single image, together with the camera projection model, can determine the shooting position; this two-dimensional information can efficiently locate the shooting position and improve calculation efficiency.
  • In some embodiments, determining the position information of the target area in the target image includes: extracting a first image from the target video and determining first position information of the target area in the first image; performing optical flow tracking on the target image based on initial feature points determined according to the first position information to obtain target feature points, where the target image is the adjacent video frame of the first image in the target video; and fitting the target feature points to obtain the position information of the target area in the target image.
  • The target video can be a video including the target image: any video that needs to be detected and tracked, a video captured by a device with a video capture function, or a video obtained from the Internet or other devices; the specifics are not limited.
  • The target image may be any image frame in the target video, and the first image may be the previous image frame in the target video adjacent to the target image in time sequence.
  • the first position information refers to the position information of the target area in the first image, and may include information such as vertex coordinates, center point coordinates, and the like.
  • a preset detection algorithm is used to detect the target area on the first image, and the first position information of the target area in the first image is determined.
  • the above-mentioned preset detection algorithm may be a deep learning-based detection algorithm or a contour detection algorithm, etc., which may be determined according to the actual situation.
  • In some embodiments, the preset detection algorithm may be any ellipse detection algorithm. The ellipse detection algorithm performs contour detection on the first image and then fits the elliptical contours obtained by the contour detection to obtain the first position information of the target area in the first image.
  • Determining the initial feature point according to the first position information includes: sampling the edge contour of the target area in the first image according to the first position information to determine the initial feature point.
  • In some embodiments, sampling the edge contour of the target area in the first image according to the first position information to determine the initial feature points includes: when the target area is an elliptical area, sampling the edge contour of the target area in polar coordinates according to the first position information to obtain the initial feature points.
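Sampling an elliptical edge contour at uniform polar angles, as described here, is straightforward (a sketch with hypothetical center and axis values; a real detected ellipse would generally also carry a rotation term):

```python
import math

def sample_ellipse_contour(cx, cy, a, b, n=32):
    """Sample n points on an axis-aligned ellipse at equal polar-angle steps.

    (cx, cy): center of the ellipse; a, b: semi-major and semi-minor axes
    in pixels; returns a list of (x, y) contour points.
    """
    points = []
    for i in range(n):
        theta = 2.0 * math.pi * i / n
        points.append((cx + a * math.cos(theta), cy + b * math.sin(theta)))
    return points

# Hypothetical detected ellipse: center (100, 80), semi-axes 40 and 25 px.
initial_feature_points = sample_ellipse_contour(100.0, 80.0, 40.0, 25.0)
```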
  • The initial feature points obtained by the above sampling are tracked by the optical flow tracking algorithm; the feature points that are successfully tracked are retained as target feature points, and the feature points that fail to be tracked are eliminated. The target feature points are then fitted to obtain the position information of the target area in the target image.
  • fitting the target feature points to obtain position information of the target area in the target image including: if the coverage of the target feature points on the edge contour of the target area is greater than or equal to a preset range, then The target feature points are fitted to obtain the position information of the target area in the target image.
  • the preset range refers to a preset range that satisfies the shape of the target area, which may be set according to actual conditions. For example, the preset range may be 3/4 of the entire range of the edge contour.
  • After the target feature points are determined, it can be determined whether their coverage on the edge contour of the target area is greater than or equal to the preset range; if so, a fitting algorithm is used to fit the target feature points to obtain the position information of the target area in the target image. If the coverage of the target feature points on the edge contour of the target area is smaller than the preset range, a preset detection algorithm can be used directly to detect the target image and determine the position information of the target area.
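The coverage test can be approximated by checking what fraction of the sampled contour points survived tracking (a simplification: the disclosure speaks of coverage of the edge contour, for which the surviving fraction is a reasonable proxy; the 0.75 default mirrors the 3/4 example above):

```python
def coverage_sufficient(tracked_ok, preset_fraction=0.75):
    """Decide whether fitting is reliable.

    tracked_ok: one boolean per sampled contour point (True = successfully
    tracked). Returns True when enough of the contour is still covered.
    """
    return sum(tracked_ok) / len(tracked_ok) >= preset_fraction

flags = [True] * 26 + [False] * 6          # 26 of 32 points tracked
fit_from_tracking = coverage_sufficient(flags)
```

When `fit_from_tracking` is False, the fallback described above applies: re-run the preset detection algorithm on the target image instead of fitting.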
  • In some embodiments, the method further includes: determining a change parameter of the target image relative to the first image. Performing optical flow tracking on the target image based on the initial feature points determined according to the first position information to obtain the target feature points includes: if it is determined based on the change parameter that the target image does not meet the multiplexing condition, performing optical flow tracking on the target image based on the initial feature points determined according to the first position information to obtain the target feature points.
  • The change parameter refers to a parameter representing the change of the target image relative to the first image.
  • In some embodiments, determining the change parameter of the target image relative to the first image may include: extracting a first feature point in the first image; performing optical flow tracking on the target image according to the first feature point to determine a second feature point; and determining the moving distance between the second feature point and the first feature point as the change parameter.
  • the first feature point may be a corner point detected on the first image by using the FAST corner point detection algorithm.
  • The multiplexing condition refers to a specific judging condition for determining whether the target image can reuse (multiplex) the position of the target area from the first image.
  • The change threshold refers to a preset threshold, which can be set according to the actual situation. For example, when the change parameter is represented by the movement of the feature points in the target image relative to the corresponding feature points in the first image, the change threshold can be set as a distance threshold, for example 0.8.
  • In some embodiments, the change parameter can be compared with the change threshold. If the change parameter is greater than the change threshold, it is determined that the target image does not meet the multiplexing condition and needs to be tracked again: optical flow tracking is performed on the target image based on the initial feature points determined according to the first position information to obtain the target feature points. Otherwise, it is determined that the target image satisfies the multiplexing condition, and the first position information is determined as the position information of the target area in the target image.
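Putting the change parameter and the multiplexing condition together (the 0.8 threshold follows the example above; the point lists are hypothetical tracked correspondences between the two frames):

```python
import math

def change_parameter(first_points, second_points):
    """Mean displacement between corresponding feature points of two frames."""
    distances = [math.dist(p, q) for p, q in zip(first_points, second_points)]
    return sum(distances) / len(distances)

def reuse_previous_position(first_points, second_points, threshold=0.8):
    """True -> multiplexing condition met: reuse the previous frame's
    target-area position. False -> re-track and re-fit the target area."""
    return change_parameter(first_points, second_points) <= threshold

pts_prev = [(10.0, 10.0), (20.0, 15.0)]
pts_curr = [(10.3, 10.0), (20.0, 15.4)]   # small inter-frame motion
decision = reuse_previous_position(pts_prev, pts_curr)
```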
  • In this way, the position of the target area in the target image can be determined more accurately through feature point tracking and fitting, which improves the accuracy and computational efficiency of determining the position of the target area in the target image. A multiplexing condition is added for two adjacent video frames: when the change between the two adjacent video frames is large, the feature point tracking and fitting described above are used to determine the position of the target area; when the change or difference between the two adjacent video frames is small, the similarity between them is high, and the next video frame can directly reuse the position information of the target area from the previous video frame without re-detection, saving workload and improving computing efficiency.
  • FIG. 2 is a schematic flowchart of another method for determining a shooting position provided by an embodiment of the present disclosure. On the basis of the above-mentioned embodiment, this embodiment further optimizes the above-mentioned method for determining a shooting position. As shown in Figure 2, the method includes:
  • Step 201 Determine the attribute information of the target area in the target image.
  • the attribute information includes position information and size information, and the target area is the area where the target shape object is located in the target image.
  • determining the position information of the target area in the target image includes: extracting a first image in the target video, and determining the first position information of the target area in the first image; initial feature points determined according to the first position information Optical flow tracking is performed on the target image to obtain target feature points; wherein, the target image is an adjacent video frame of the first image in the target video; the target feature points are fitted to obtain the position information of the target area in the target image.
  • In some embodiments, fitting the target feature points to obtain the position information of the target area in the target image includes: if the coverage of the target feature points on the edge contour of the target area is greater than or equal to a preset range, fitting the target feature points to obtain the position information of the target area in the target image.
  • In some embodiments, the method further includes: determining a change parameter of the target image relative to the first image. Performing optical flow tracking on the target image according to the initial feature points determined by the first position information to obtain the target feature points includes: if it is determined based on the change parameter that the target image does not meet the multiplexing condition, performing optical flow tracking on the target image with the initial feature points determined according to the first position information to obtain the target feature points.
  • Step 202 Input the attribute information of the target area into the projection equation of the camera projection model, and determine the displacement information from the shooting position to the target-shaped object in the world coordinate system.
  • the shooting position information is position information of the shooting position relative to the world coordinate system.
  • the displacement information from the shooting position to the target-shaped object in the world coordinate system is related to the position information of the origin of the world coordinate system under the coordinate system where the shooting position is located.
  • Step 203 Perform position solution according to the displacement information from the shooting position to the target-shaped object in the world coordinate system and the position information of the target-shaped object in the world coordinate system to obtain the shooting position information.
  • In some embodiments, the specific process may include: 1. Using any ellipse detection algorithm to obtain the center point coordinates and size (major and minor axes) of the elliptical region in the target image. 2. Determining the shooting position according to the pinhole projection model (i.e., the camera projection model) and the position and size of the ellipse.
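The two steps above can be illustrated with a deliberately simplified back-projection. This is not the full projection-equation solve of the disclosure: it assumes the object is a circle of known physical radius facing the camera, so depth follows from apparent size as Z ≈ f·R/r. All numeric values are hypothetical:

```python
def displacement_from_ellipse(fx, fy, cx, cy, u, v, major_axis_px, real_radius):
    """Estimate W12, the displacement from the shooting position to the
    object, in the camera frame, from the detected ellipse.

    fx, fy: focal lengths (px); (cx, cy): principal point; (u, v): detected
    ellipse center; major_axis_px: major-axis length in pixels;
    real_radius: physical radius of the circular object (same unit as Z).
    """
    # Depth from apparent size: a circle of radius R appears r = fx*R/Z px.
    z = fx * real_radius / (major_axis_px / 2.0)
    # Back-project the ellipse center at that depth.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Hypothetical numbers: 800 px focal length, ellipse centered on the
# principal point with a 160 px major axis, real radius 0.1 m.
w12 = displacement_from_ellipse(800, 800, 320, 240, 320, 240, 160, 0.1)
```

Combined with the known object position W20, this W12 would then feed the vector addition that yields the shooting position W10.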
  • In the projection equation of the pinhole projection model (i.e., the camera projection model), R12 represents the rotation matrix from the coordinate system where the shooting position is located to the world coordinate system, p represents the coordinates of the center point of the elliptical area, and T represents the position information of the origin of the world coordinate system in the coordinate system where the shooting position is located; the origin of the shooting position coordinate system is the shooting position.
  • After W12 is determined, W12 and W20 can be input into the vector addition described above to obtain the shooting position W10 in the world coordinate system.
  • W10 represents the shooting position information in the world coordinate system, W20 represents the center point of the target-shaped object in the world coordinate system, and W12 represents the displacement information from the shooting position to the target area in the world coordinate system.
  • In the related art, the efficiency of determining the shooting position is low. In the embodiments of the present disclosure, the shooting position can be determined only by detecting the area where the target-shaped object is located in one image and using the camera projection model; the fact that the shape can be characterized by an equation provides more constraints and improves the efficiency of the determination.
  • the shooting position determination solution determines the attribute information of the target area in the target image, wherein the attribute information includes position information and size information, and the target area is the area where the target shape object is located in the target image; according to the attribute information of the target area and a camera projection model to determine the shooting position information of the target image, wherein the shooting position information is the position information of the shooting position relative to the world coordinate system.
  • In this way, the position and size of the region where the fixed-shape object is located in one image, together with the camera projection model, can determine the shooting position; this two-dimensional information can efficiently locate the shooting position and improve calculation efficiency.
  • FIG. 3 is a schematic structural diagram of an apparatus for determining a photographing position according to an embodiment of the present disclosure.
  • the apparatus may be implemented by software and/or hardware, and may generally be integrated into an electronic device.
  • the device includes:
  • the image information module 301 is used to determine the attribute information of the target area in the target image, wherein the attribute information includes position information and size information, and the target area is the area where the target shape object is located in the target image;
  • the shooting position module 302 is configured to determine the shooting position information of the target image according to the attribute information of the target area and the camera projection model, wherein the shooting position information is the position information of the shooting position relative to the world coordinate system.
  • In some embodiments, the shooting position module 302 is used for: inputting the attribute information of the target area into the projection equation of the camera projection model to determine the displacement information from the shooting position to the target-shaped object in the world coordinate system; and performing position solving according to that displacement information and the position information of the target-shaped object in the world coordinate system to obtain the shooting position information.
  • In the projection equation, the position information of the target area is related to the size information of the target area, the internal parameters of the photographing device, the rotation matrix from the coordinate system where the photographing position is located to the world coordinate system, and the position information of the origin of the world coordinate system in the coordinate system where the shooting position is located.
  • the displacement information from the shooting position to the target-shaped object under the world coordinate system is related to the position information of the origin of the world coordinate system under the coordinate system where the shooting position is located.
  • In some embodiments, the image information module 301 is used for: extracting a first image from the target video and determining first position information of the target area in the first image; performing optical flow tracking on the target image based on initial feature points determined according to the first position information to obtain target feature points, where the target image is an adjacent video frame of the first image in the target video; and fitting the target feature points to obtain the position information of the target area in the target image.
  • In some embodiments, in terms of fitting the target feature points to obtain the position information of the target area in the target image, the image information module 301 is used for: if the coverage range of the target feature points on the edge contour of the target area is greater than or equal to a preset range, fitting the target feature points to obtain the position information of the target area in the target image.
  • In some embodiments, the image information module 301 is further used for: performing optical flow tracking on the target image based on the initial feature points determined according to the first position information to obtain target feature points.
  • the apparatus for determining a photographing position provided by the embodiment of the present disclosure can execute the method for determining a photographing position provided by any embodiment of the present disclosure, and has corresponding functional modules and beneficial effects for executing the method.
  • An embodiment of the present disclosure further provides a computer program product, including a computer program/instruction, when the computer program/instruction is executed by a processor, the method for determining a shooting position provided by any embodiment of the present disclosure is implemented.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. Referring to FIG. 4, it shows an electronic device 400 suitable for implementing an embodiment of the present disclosure.
  • The electronic device 400 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), and a vehicle-mounted terminal (for example, a car navigation terminal), as well as stationary terminals such as a digital TV and a desktop computer.
  • the electronic device shown in FIG. 4 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
  • The electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 401, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data required for the operation of the electronic device 400.
  • the processing device 401, the ROM 402, and the RAM 403 are connected to each other through a bus 404.
  • An input/output (I/O) interface 405 is also connected to bus 404 .
  • The following devices may be connected to the I/O interface 405: an input device 406 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 408 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 409. The communication device 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data.
  • FIG. 4 shows electronic device 400 having various means, it should be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
  • Embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • The computer program may be downloaded and installed from a network via the communication device 409, or installed from the storage device 408, or installed from the ROM 402.
  • When the computer program is executed by the processing device 401, the above-mentioned functions defined in the shooting position determination method of the embodiments of the present disclosure are executed.
  • The computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in connection with an instruction execution system, apparatus, or device.
  • A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied thereon. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing.
  • A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer-readable medium may be transmitted using any suitable medium, including but not limited to an electrical wire, an optical fiber cable, RF (radio frequency), or any suitable combination of the foregoing.
  • In some implementations, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network).
  • Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
  • The above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist alone without being assembled into the electronic device.
  • The above-mentioned computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determine attribute information of a target region in a target image, where the attribute information includes position information and size information, and the target region is the region in the target image occupied by an object of a target shape; and determine shooting position information of the target image according to the attribute information of the target region and a camera projection model, where the shooting position information is the position of the shooting position relative to a world coordinate system.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • Each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved.
  • Each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • The units involved in the embodiments of the present disclosure may be implemented in software or in hardware, and the name of a unit does not, in some cases, constitute a limitation on the unit itself.
  • Exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
  • In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • According to one or more embodiments, the present disclosure provides a shooting position determination method, including:
  • determining attribute information of a target region in a target image, where the attribute information includes position information and size information, and the target region is the region in the target image occupied by an object of a target shape; and
  • determining shooting position information of the target image according to the attribute information of the target region and a camera projection model, where the shooting position information is the position of the shooting position relative to a world coordinate system.
  • In the method, determining the shooting position information of the target image according to the attribute information of the target region and the camera projection model includes: substituting the attribute information of the target region into the projection equation of the camera projection model to determine displacement information from the shooting position to the target-shaped object in the world coordinate system; and solving for the position according to that displacement information and the position information of the target-shaped object in the world coordinate system, to obtain the shooting position information.
  • In the projection equation, the position information of the target region is related to the size information of the target region, the internal parameters of the shooting apparatus, the rotation matrix from the coordinate system of the shooting position to the world coordinate system, and the position of the origin of the world coordinate system in the coordinate system of the shooting position.
  • The displacement information from the shooting position to the target-shaped object in the world coordinate system is related to the position of the origin of the world coordinate system in the coordinate system of the shooting position.
  • Determining the position information of the target region in the target image includes: extracting a first image from a target video and determining first position information of the target region in the first image; performing optical-flow tracking on the target image using initial feature points determined from the first position information, to obtain target feature points, where the target image is the video frame adjacent to the first image in the target video; and fitting the target feature points to obtain the position information of the target region in the target image.
  • Fitting the target feature points to obtain the position information of the target region in the target image includes: if the coverage of the target feature points over the edge contour of the target region is greater than or equal to a preset range, fitting the target feature points to obtain the position information of the target region in the target image.
  • The method further includes: determining a change parameter of the target image relative to the first image; and, if it is determined based on the change parameter that the target image does not satisfy a reuse condition, performing optical-flow tracking on the target image using the initial feature points determined from the first position information, to obtain the target feature points.
  • According to one or more embodiments, the present disclosure provides a shooting position determination apparatus, including:
  • an image information module, configured to determine attribute information of a target region in a target image, where the attribute information includes position information and size information, and the target region is the region in the target image occupied by an object of a target shape; and
  • a shooting position module, configured to determine shooting position information of the target image according to the attribute information of the target region and a camera projection model, where the shooting position information is the position of the shooting position relative to a world coordinate system.
  • The shooting position module is configured to: substitute the attribute information of the target region into the projection equation of the camera projection model to determine displacement information from the shooting position to the target-shaped object in the world coordinate system; and solve for the position according to that displacement information and the position information of the target-shaped object in the world coordinate system, to obtain the shooting position information.
  • In the projection equation, the position information of the target region is related to the size information of the target region, the internal parameters of the shooting apparatus, the rotation matrix from the coordinate system of the shooting position to the world coordinate system, and the position of the origin of the world coordinate system in the coordinate system of the shooting position.
  • The displacement information from the shooting position to the target-shaped object in the world coordinate system is related to the position of the origin of the world coordinate system in the coordinate system of the shooting position.
  • The image information module is configured to: extract a first image from a target video and determine first position information of the target region in the first image; perform optical-flow tracking on the target image using initial feature points determined from the first position information, to obtain target feature points, where the target image is the video frame adjacent to the first image in the target video; and fit the target feature points to obtain the position information of the target region in the target image.
  • The image information module is configured to: if the coverage of the target feature points over the edge contour of the target region is greater than or equal to a preset range, fit the target feature points to obtain the position information of the target region in the target image.
  • The image information module is configured to: determine a change parameter of the target image relative to the first image; and, if it is determined based on the change parameter that the target image does not satisfy a reuse condition, perform optical-flow tracking on the target image using the initial feature points determined from the first position information, to obtain the target feature points.
  • According to one or more embodiments, the present disclosure provides an electronic device, comprising: a processor; and a memory for storing instructions executable by the processor; where the processor is configured to read the executable instructions from the memory and execute the instructions to implement any of the shooting position determination methods provided in the present disclosure.
  • According to one or more embodiments, the present disclosure provides a computer-readable storage medium storing a computer program, the computer program being used to execute any of the shooting position determination methods provided in the present disclosure.
  • According to one or more embodiments, the present disclosure provides a computer program product, including a computer program/instructions which, when executed by a processor, implement any of the shooting position determination methods provided in the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computational Mathematics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Algebra (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present disclosure relate to a shooting position determination method, apparatus, device, and medium. The method includes: determining attribute information of a target region in a target image, where the attribute information includes position information and size information, and the target region is the region in the target image occupied by an object of a target shape; and determining shooting position information of the target image according to the attribute information of the target region and a camera projection model, where the shooting position information is the position of the shooting position relative to a world coordinate system. With this technical solution, the shooting position can be determined from the position and size of the region occupied by a fixed-shape object in a single image, together with the camera projection model. Compared with current solutions that use only single-dimensional information and therefore require multiple images, using information of two dimensions locates the shooting position efficiently and improves computational efficiency.

Description

Shooting position determination method, apparatus, device, and medium

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to Chinese Patent Application No. 202110277835.1, entitled "Shooting position determination method, apparatus, device, and medium" and filed on March 15, 2021, the entire contents of which are incorporated herein by reference.

TECHNICAL FIELD

The present disclosure relates to the technical field of image processing, and in particular to a shooting position determination method, apparatus, device, and medium.

BACKGROUND

With the continuous development of image processing technology, image-based terminal products keep increasing.

At present, determining a shooting position based on images suffers from low computational efficiency and slow speed.

SUMMARY

To solve the above technical problem, or to at least partly solve it, the present disclosure provides a shooting position determination method, apparatus, device, and medium.

An embodiment of the present disclosure provides a shooting position determination method, the method including:

determining attribute information of a target region in a target image, where the attribute information includes position information and size information, and the target region is the region in the target image occupied by an object of a target shape; and

determining shooting position information of the target image according to the attribute information of the target region and a camera projection model, where the shooting position information is the position of the shooting position relative to a world coordinate system.
An embodiment of the present disclosure further provides a shooting position determination apparatus, the apparatus including:

an image information module, configured to determine attribute information of a target region in a target image, where the attribute information includes position information and size information, and the target region is the region in the target image occupied by an object of a target shape; and

a shooting position module, configured to determine shooting position information of the target image according to the attribute information of the target region and a camera projection model, where the shooting position information is the position of the shooting position relative to a world coordinate system.

An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; where the processor is configured to read the executable instructions from the memory and execute the instructions to implement the shooting position determination method provided by the embodiments of the present disclosure.

An embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program, the computer program being used to execute the shooting position determination method provided by the embodiments of the present disclosure.

An embodiment of the present disclosure further provides a computer program product, including a computer program/instructions which, when executed by a processor, implement the shooting position determination method provided by the embodiments of the present disclosure.

Compared with the prior art, the technical solutions provided by the embodiments of the present disclosure have the following advantages. The shooting position determination solution provided by the embodiments of the present disclosure determines attribute information of a target region in a target image, where the attribute information includes position information and size information and the target region is the region in the target image occupied by an object of a target shape, and determines shooting position information of the target image according to the attribute information of the target region and a camera projection model, where the shooting position information is the position of the shooting position relative to a world coordinate system. With this technical solution, the shooting position can be determined from the position and size of the region occupied by a fixed-shape object in a single image, together with the camera projection model. Compared with current solutions that use only single-dimensional information and therefore require multiple images, using information of two dimensions locates the shooting position efficiently and improves computational efficiency.
BRIEF DESCRIPTION OF THE DRAWINGS

The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent in conjunction with the drawings and with reference to the following detailed description. Throughout the drawings, the same or similar reference signs denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.

FIG. 1 is a schematic flowchart of a shooting position determination method provided by an embodiment of the present disclosure;

FIG. 2 is a schematic flowchart of another shooting position determination method provided by an embodiment of the present disclosure;

FIG. 3 is a schematic structural diagram of a shooting position determination apparatus provided by an embodiment of the present disclosure;

FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.

DETAILED DESCRIPTION

Embodiments of the present disclosure will be described below in more detail with reference to the drawings. Although some embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of protection of the present disclosure.

It should be understood that the steps recorded in the method implementations of the present disclosure may be executed in different orders and/or in parallel. In addition, the method implementations may include additional steps and/or omit the steps shown. The scope of the present disclosure is not limited in this respect.

The term "include" and its variants as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "at least partly based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.

It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order or interdependence of the functions performed by these apparatuses, modules, or units.

It should be noted that the modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless otherwise clearly indicated by the context, they should be understood as "one or more".

The names of the messages or information exchanged between multiple apparatuses in the implementations of the present disclosure are for illustrative purposes only and are not used to limit the scope of these messages or information.
FIG. 1 is a schematic flowchart of a shooting position determination method provided by an embodiment of the present disclosure. The method may be performed by a shooting position determination apparatus, which may be implemented in software and/or hardware and may generally be integrated in an electronic device. As shown in FIG. 1, the method includes:

Step 101: determine attribute information of a target region in a target image, where the attribute information includes position information and size information, and the target region is the region in the target image occupied by an object of a target shape.

The target image may be any image captured by the shooting apparatus whose position is to be determined; it may be an image captured in real time, or any image frame of a video captured in real time, without limitation. The target region is the region of the target image occupied by an object of the target shape, that is, a region having the target shape. A target shape is a shape that can be characterized by a single equation; for example, target shapes may include ellipses and circles. The embodiments of the present disclosure are described taking an elliptical region in the target image as the target region.

The position information is information that characterizes the position of the target region in the target image, and may include the vertex coordinates and center-point coordinates of the target region in the target image as well as the size of the target region. The size information refers to the size of the target region. For example, when the target region is an elliptical region, the attribute information may include the center-point coordinates of the elliptical region and the lengths of its major and minor axes.

In the embodiments of the present disclosure, after the target image is acquired, the position information and size information of the target region in the target image may be determined by any detection method; for example, the position information may be determined by a preset detection algorithm or a feature-point tracking algorithm.

Step 102: determine shooting position information of the target image according to the attribute information of the target region and a camera projection model, where the shooting position information is the position of the shooting position relative to a world coordinate system.

The shooting position information may be the position, relative to the world coordinate system, of the shooting apparatus that captured the target image, and the camera projection model may be a pinhole projection model of the shooting apparatus. The shooting apparatus in the embodiments of the present disclosure may be a device with an image acquisition function; it may be a stand-alone camera or a camera module of a terminal device, for example the camera module of a mobile phone.

In the embodiments of the present disclosure, determining the shooting position information of the target image according to the attribute information of the target region and the camera projection model includes: substituting the attribute information of the target region into the projection equation of the camera projection model to determine displacement information from the shooting position to the target-shaped object in the world coordinate system; and solving for the position according to this displacement information and the position information of the target-shaped object in the world coordinate system, to obtain the shooting position information.

The position of the target-shaped object in the world coordinate system may be a preset, known fixed value; for example, the origin of the world coordinate system may be set at the location of the target-shaped object, in which case its coordinates are (0, 0, 0). The shooting position information can then be determined from the position of the target-shaped object, the displacement from the shooting position to the target-shaped object, and a transformation equation. The transformation equation may be written as W10 = W20 + W12, where W10 denotes the shooting position information in the world coordinate system, W20 denotes the position information of the target-shaped object in the world coordinate system, and W12 denotes the displacement from the shooting position to the target region in the world coordinate system. W10, W20, and W12 are all vectors in the world coordinate system, with direction and magnitude: W10 is the vector from the shooting position to the world origin, W20 is the vector from the position of the target-shaped object to the world origin, and W12 is the vector from the shooting position to the target-shaped object. The vectors W20 and W12 are added by the triangle rule: the vectors are placed head to tail in sequence, and the resultant W10 points from the start of the first vector to the end of the last.

W12 is the unknown quantity and can be computed by substituting the position and size information of the target region into the projection equation of the camera projection model. In the projection equation, the position information of the target region is related to the size information of the target region, the internal parameters of the shooting apparatus, the rotation matrix from the coordinate system of the shooting position to the world coordinate system, and the position of the world origin in the coordinate system of the shooting position. The projection equation may be written as p = π[K(R12*W20 + T)], where π denotes a coefficient determined from the size information of the target region, K denotes the internal parameters of the shooting apparatus (which may include the focal length, distortion parameters, and other internal parameters), R12 denotes the rotation matrix from the coordinate system of the shooting position to the world coordinate system, p denotes the position information of the target region, and T denotes the position of the world origin in the coordinate system of the shooting position.

The displacement W12 from the shooting position to the target-shaped object in the world coordinate system is related to the position T of the world origin in the coordinate system of the shooting position. Transforming the projection equation p = π[K(R12*W20 + T)] gives K^(-1)*ratio*p = R12*W20 + T, where ratio = 1/π. Transforming further gives R21*K^(-1)*ratio*p = W20 + W01 = W21, where R21 = R12^(-1), T = R12*W01, and W21 denotes the vector from the target-shaped object to the shooting position in the world coordinate system. It follows that W12 = -1*ratio*R21*K^(-1)*p, where W01 denotes the vector from the world origin to the shooting position in the world coordinate system, i.e. the reverse of the shooting position vector, R21 denotes the inverse of the rotation matrix from the coordinate system of the shooting position to the world coordinate system, and ratio = 1/π. When the target region is an elliptical region, ratio is inversely proportional to the sum of the major and minor axes of the ellipse; that is, the larger the elliptical region, the smaller ratio is.

After the displacement W12 from the shooting position to the target region in the world coordinate system is determined, the shooting position information W10 in the world coordinate system can be obtained through the transformation equation W10 = W20 + W12. Optionally, through coordinate transformations, the position of the world origin in the coordinate system of the shooting position can be determined as T = -1*R12*W10, and the shooting position in the coordinate system of the target-shaped object as M = W10*R23, where R23 denotes the rotation matrix from the world coordinate system to the coordinate system of the target-shaped object; the origin of the coordinate system of the shooting position is the shooting position, and the origin of the coordinate system of the target-shaped object is the target-shaped object.

In the embodiments of the present disclosure, if the target image is an image frame of a video and a shooting position needs to be determined for every image frame of the video, then after the shooting position of each frame has been determined as above, a Kalman filtering algorithm may be applied for smoothing, avoiding jumps in the shooting position and improving the accuracy of the determined shooting position information.
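The position solve described above can be sketched numerically. A minimal illustration with NumPy, assuming the target-shaped object sits at the world origin, an ellipse center p observed in pixels, intrinsics K, rotation R12, and a scale factor ratio derived from the ellipse axes (the function name and the exact scale model are illustrative, not specified by the disclosure):

```python
import numpy as np

def solve_shooting_position(p_px, K, R12, ratio, W20=np.zeros(3)):
    """Sketch of the position solve: W12 = -ratio * R21 * K^(-1) * p,
    then W10 = W20 + W12 (triangle rule).

    p_px : (u, v) center of the detected elliptical region, in pixels.
    K    : 3x3 camera intrinsic matrix.
    R12  : rotation from the shooting-position frame to the world frame.
    ratio: scale factor 1/pi; assumed here to come from the sum of the
           ellipse's major and minor axes (illustrative assumption).
    W20  : position of the target-shaped object in the world frame
           (the world origin by convention, hence zeros).
    """
    p_h = np.array([p_px[0], p_px[1], 1.0])  # homogeneous pixel coordinates
    R21 = R12.T                              # inverse of a rotation matrix is its transpose
    W12 = -ratio * R21 @ np.linalg.inv(K) @ p_h
    return W20 + W12                         # shooting position in the world frame
```

For a camera looking straight at the object (R12 = identity) with the ellipse center at the principal point, W12 points along the optical axis, so the recovered position lies on that axis at distance ratio from the object.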
The shooting position determination solution provided by the embodiments of the present disclosure determines attribute information of a target region in a target image, where the attribute information includes position information and size information, and the target region is the region in the target image occupied by an object of a target shape; it then determines shooting position information of the target image according to the attribute information of the target region and a camera projection model, where the shooting position information is the position of the shooting position relative to a world coordinate system. With this technical solution, the shooting position can be determined from the position and size of the region occupied by a fixed-shape object in a single image, together with the camera projection model. Compared with current solutions that require multiple images because they use only single-dimensional information, using information of two dimensions locates the shooting position efficiently and improves computational efficiency.
In some embodiments, determining the position information of the target region in the target image includes: extracting a first image from a target video and determining first position information of the target region in the first image; performing optical-flow tracking on the target image using initial feature points determined from the first position information, to obtain target feature points, where the target image is the video frame adjacent to the first image in the target video; and fitting the target feature points to obtain the position information of the target region in the target image.

The target video may be a video that includes the target image: any video that needs detection and tracking, whether captured by a device with a video acquisition function or obtained from the Internet or another device, without limitation. The target image may be any image frame of the target video, and the first image may be the frame immediately preceding the target image in temporal order. The first position information is the position information of the target region in the first image, and may include vertex coordinates, center-point coordinates, and the like.

In the embodiments of the present disclosure, a preset detection algorithm detects the target region in the first image to determine the first position information of the target region in the first image. The preset detection algorithm may be a deep-learning-based detection algorithm, a contour detection algorithm, or the like, determined according to the actual situation. For example, when the target region is an elliptical region, the preset detection algorithm may be any ellipse detection algorithm: contour detection is performed on the first image, and the elliptical contour obtained by the contour detection is fitted to obtain the first position information of the target region in the first image.

Determining the initial feature points from the first position information includes: sampling the edge contour of the target region in the first image according to the first position information to determine the initial feature points. Optionally, this includes: when the target region is an elliptical region, expressing the target region in polar coordinates according to the first position information to obtain an elliptical contour, where the first position information includes the vertex coordinates and/or center-point coordinates of the target region in the first image; and sampling the elliptical contour at preset polar-angle intervals to obtain the initial feature points.
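The polar-angle sampling just described can be sketched as follows. A minimal illustration for an axis-aligned ellipse (the function name, the axis-aligned assumption, and the default interval are illustrative; the disclosure only specifies sampling at a preset polar-angle interval):

```python
import math

def sample_ellipse_contour(cx, cy, a, b, step_deg=10.0):
    """Express the elliptical region in polar form around its center
    (cx, cy), with semi-axes a and b, and sample the contour at a preset
    polar-angle interval to produce the initial feature points.
    """
    pts = []
    t = 0.0
    while t < 360.0:
        rad = math.radians(t)
        pts.append((cx + a * math.cos(rad), cy + b * math.sin(rad)))
        t += step_deg
    return pts
```

These sampled points would then be handed to the optical-flow tracker; points whose tracking fails are discarded in the next step.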
The initial feature points obtained by the above sampling are then tracked with an optical-flow tracking algorithm; feature points that are tracked successfully are retained as target feature points, and feature points whose tracking fails are discarded. The target feature points are fitted to obtain the position information of the target region in the target image.

In some embodiments, fitting the target feature points to obtain the position information of the target region in the target image includes: if the coverage of the target feature points over the edge contour of the target region is greater than or equal to a preset range, fitting the target feature points to obtain the position information of the target region in the target image. The preset range is a preset range satisfying the shape of the target region, which may be set according to the actual situation; for example, the preset range may be 3/4 of the full edge contour. Specifically, after the target feature points are determined, it may be judged whether their coverage over the edge contour of the target region is greater than or equal to the preset range; if so, a fitting algorithm is applied to the target feature points to obtain the position information of the target region in the target image. If the coverage of the target feature points over the edge contour of the target region is smaller than the preset range, the preset detection algorithm may be applied directly to the target image to determine the position information of the target region in the target image.
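The coverage test above can be sketched by binning the surviving feature points by polar angle around the region center. A minimal illustration, with the bin count and the 3/4 default taken from the example in the text (the function name and binning scheme are illustrative):

```python
import math

def contour_coverage(points, center, min_fraction=0.75):
    """Return True when the tracked feature points cover at least
    min_fraction of the edge contour, measured as the fraction of
    equal polar-angle bins around the center that contain a point.
    """
    n_bins = 36
    hit = [False] * n_bins
    cx, cy = center
    for x, y in points:
        angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
        hit[int(angle / (2 * math.pi) * n_bins) % n_bins] = True
    return sum(hit) / n_bins >= min_fraction
```

Only when this returns True would the points be passed to an ellipse-fitting routine; otherwise the frame would be re-detected from scratch with the preset detection algorithm.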
In some embodiments, after determining the first position information of the target region in the first image, the method further includes: determining a change parameter of the target image relative to the first image; and performing optical-flow tracking on the target image using initial feature points determined from the first position information, to obtain target feature points, includes: if it is determined based on the change parameter that the target image does not satisfy a reuse condition, performing the optical-flow tracking on the target image using the initial feature points determined from the first position information, to obtain the target feature points.

The change parameter is a parameter characterizing how much the target image has changed relative to the first image. Optionally, determining the change parameter of the target image relative to the first image may include: extracting first feature points from the first image; performing optical-flow tracking on the target image based on the first feature points to determine second feature points; and taking the movement distance between the second feature points and the first feature points as the change parameter. The first feature points may be corner points detected in the first image by the FAST corner detection algorithm. The reuse condition is the judgment condition for whether the determination of the target region's position in the target image can reuse that of the first image. The change threshold is a preset threshold that may be set according to the actual situation; for example, when the change parameter is characterized by the movement of feature points in the target image relative to the corresponding feature points in the first image, the change threshold may be a distance threshold set to 0.8.

Specifically, after the change parameter of the target image relative to the first image is determined, it may be compared with the change threshold. If the change parameter is determined to be greater than the change threshold, it can be determined that the target image does not satisfy the reuse condition and re-tracking is needed: the optical-flow tracking is performed on the target image using the initial feature points determined from the first position information, to obtain the target feature points. Otherwise, the target image is determined to satisfy the reuse condition, and the first position information is taken as the position information of the target region in the target image.

In the above solution, on the basis of detecting the target region in one image frame of the video, feature-point tracking and fitting determine the position of the target region in the target image more accurately, improving the computational efficiency of localizing the target region in the target image. Moreover, a reuse-condition judgment is added for adjacent video frames: when two adjacent frames of the video change considerably, the position of the target region is determined by the feature-point tracking and fitting above; when the change or difference between two adjacent frames is small, the two frames are highly similar, so the next frame can directly reuse the position information of the target region from the previous frame without re-detection, saving work and improving computational efficiency.
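The reuse decision above can be sketched as a mean-displacement test over matched feature points. A minimal illustration, with the 0.8 default taken from the example in the text (the function name and the use of the mean are illustrative; the disclosure only says the movement distance is compared against a change threshold):

```python
def should_retrack(prev_pts, curr_pts, change_threshold=0.8):
    """Return True when the mean displacement between matching feature
    points (e.g. FAST corners followed by optical flow) in the previous
    and current frames exceeds the threshold, i.e. the previous frame's
    region position should NOT be reused and re-tracking is needed.
    """
    dists = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
             for (x1, y1), (x2, y2) in zip(prev_pts, curr_pts)]
    mean_move = sum(dists) / len(dists)
    return mean_move > change_threshold
```

When this returns False, the first image's position information is copied over unchanged, skipping detection and fitting for the current frame entirely.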
FIG. 2 is a schematic flowchart of another shooting position determination method provided by an embodiment of the present disclosure; on the basis of the above embodiments, this embodiment further optimizes the shooting position determination method. As shown in FIG. 2, the method includes:

Step 201: determine attribute information of the target region in the target image.

The attribute information includes position information and size information, and the target region is the region in the target image occupied by an object of the target shape.

Optionally, determining the position information of the target region in the target image includes: extracting a first image from a target video and determining first position information of the target region in the first image; performing optical-flow tracking on the target image using initial feature points determined from the first position information, to obtain target feature points, where the target image is the video frame adjacent to the first image in the target video; and fitting the target feature points to obtain the position information of the target region in the target image.

Optionally, fitting the target feature points to obtain the position information of the target region in the target image includes: if the coverage of the target feature points over the edge contour of the target region is greater than or equal to a preset range, fitting the target feature points to obtain the position information of the target region in the target image. Optionally, after determining the first position information of the target region in the first image, the method further includes: determining a change parameter of the target image relative to the first image; and performing optical-flow tracking on the target image using initial feature points determined from the first position information, to obtain target feature points, includes: if it is determined based on the change parameter that the target image does not satisfy a reuse condition, performing the optical-flow tracking on the target image using the initial feature points determined from the first position information, to obtain the target feature points.

Step 202: substitute the attribute information of the target region into the projection equation of the camera projection model to determine displacement information from the shooting position to the target-shaped object in the world coordinate system.

The shooting position information is the position of the shooting position relative to the world coordinate system.

Optionally, in the projection equation, the position information of the target region is related to the size information of the target region, the internal parameters of the shooting apparatus, the rotation matrix from the coordinate system of the shooting position to the world coordinate system, and the position of the world origin in the coordinate system of the shooting position. Optionally, the displacement information from the shooting position to the target-shaped object in the world coordinate system is related to the position of the world origin in the coordinate system of the shooting position.

Step 203: solve for the position according to the displacement information from the shooting position to the target-shaped object in the world coordinate system and the position information of the target-shaped object in the world coordinate system, to obtain the shooting position information.

The shooting position determination method provided by the embodiments of the present disclosure is further illustrated below by a specific example. Assuming the target-shaped object is an elliptical object, the process may include: 1. Use any ellipse detection algorithm to obtain the center-point coordinates and size (major and minor axes) of the elliptical region in the target image. 2. Determine the shooting position from the pinhole projection model (i.e., the camera projection model) together with the position and size of the ellipse. The projection equation may be written as p = π[K(R12*W20 + T)], where π denotes a coefficient determined from the size information of the target region, K denotes the internal parameters of the shooting apparatus, R12 denotes the rotation matrix from the coordinate system of the shooting position to the world coordinate system, p denotes the center-point coordinates of the elliptical region, and T denotes the position of the world origin in the coordinate system of the shooting position; the origin of the coordinate system of the shooting position is the shooting position. Transforming the projection equation p = π[K(R12*W20 + T)] gives K^(-1)*ratio*p = R12*W20 + T, where ratio = 1/π; transforming further gives R21*K^(-1)*ratio*p = W20 + W01 = W21, where R21 = R12^(-1), T = R12*W01, and W21 denotes the vector from the target-shaped object to the shooting position in the world coordinate system. It follows that W12 = -1*ratio*R21*K^(-1)*p, where W01 denotes the vector from the world origin to the shooting position in the world coordinate system, i.e. the reverse of the shooting position vector, R21 denotes the inverse of the rotation matrix from the coordinate system of the shooting position to the world coordinate system, and ratio = 1/π; when the target region is an elliptical region, ratio is inversely proportional to the sum of the major and minor axes of the ellipse, i.e. the larger the elliptical region, the smaller ratio is. After W12 is determined, W12 and W20 can be substituted into the transformation equation to obtain the shooting position W10 in the world coordinate system. The transformation equation may be written as W10 = W20 + W12, where W10 denotes the shooting position information in the world coordinate system, W20 denotes the center-point coordinates of the target-shaped object in the world coordinate system, i.e. the position information of the target-shaped object, and W12 denotes the displacement from the shooting position to the target region in the world coordinate system.

Current solutions use only single-dimensional information and require feature points from multiple images for the determination, which is less efficient. In the shooting position determination approach of the embodiments of the present disclosure, since the attribute information of the target region in the image includes information of two dimensions, the shooting position can be determined merely by detecting the region occupied by the target-shaped object in one image and applying the camera projection model; the fact that the target shape can be characterized by a single equation provides more constraints and improves the efficiency of the determination.

The shooting position determination solution provided by the embodiments of the present disclosure determines attribute information of a target region in a target image, where the attribute information includes position information and size information, and the target region is the region in the target image occupied by an object of a target shape; it then determines shooting position information of the target image according to the attribute information of the target region and a camera projection model, where the shooting position information is the position of the shooting position relative to a world coordinate system. With this technical solution, the shooting position can be determined from the position and size of the region occupied by a fixed-shape object in a single image, together with the camera projection model. Compared with current solutions that use only single-dimensional information and therefore require multiple images, using information of two dimensions locates the shooting position efficiently and improves computational efficiency.
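When the shooting position is estimated for every frame of a video, the per-frame estimates can jump; the Kalman smoothing mentioned earlier for this case can be sketched, for one coordinate, as a simple constant-position filter. The variances q and r are hypothetical tuning values; the disclosure does not specify the filter design:

```python
def kalman_smooth(measurements, q=1e-3, r=1e-1):
    """One-dimensional constant-position Kalman filter (illustrative).

    measurements: per-frame estimates of one coordinate of the shooting
    position.  q is the process variance, r the measurement variance.
    """
    x, p = measurements[0], 1.0   # initial state and covariance
    out = [x]
    for z in measurements[1:]:
        p = p + q                 # predict: uncertainty grows between frames
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update toward the new measurement
        p = (1 - k) * p
        out.append(x)
    return out
```

A jumpy sequence such as [0, 0, 5, 0, 0] is pulled toward its neighbors rather than following the outlier, which is the anti-jump behavior described above.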
FIG. 3 is a schematic structural diagram of a shooting position determination apparatus provided by an embodiment of the present disclosure. The apparatus may be implemented in software and/or hardware and may generally be integrated in an electronic device. As shown in FIG. 3, the apparatus includes:

an image information module 301, configured to determine attribute information of a target region in a target image, where the attribute information includes position information and size information, and the target region is the region in the target image occupied by an object of a target shape; and

a shooting position module 302, configured to determine shooting position information of the target image according to the attribute information of the target region and a camera projection model, where the shooting position information is the position of the shooting position relative to a world coordinate system.

Optionally, the shooting position module 302 is configured to:

substitute the attribute information of the target region into the projection equation of the camera projection model to determine displacement information from the shooting position to the target-shaped object in the world coordinate system; and

solve for the position according to the displacement information from the shooting position to the target-shaped object in the world coordinate system and the position information of the target-shaped object in the world coordinate system, to obtain the shooting position information.

Optionally, in the projection equation, the position information of the target region is related to the size information of the target region, the internal parameters of the shooting apparatus, the rotation matrix from the coordinate system of the shooting position to the world coordinate system, and the position of the origin of the world coordinate system in the coordinate system of the shooting position.

Optionally, the displacement information from the shooting position to the target-shaped object in the world coordinate system is related to the position of the origin of the world coordinate system in the coordinate system of the shooting position.

Optionally, the image information module 301 is configured to:

extract a first image from a target video and determine first position information of the target region in the first image;

perform optical-flow tracking on the target image using initial feature points determined from the first position information, to obtain target feature points, where the target image is the video frame adjacent to the first image in the target video; and

fit the target feature points to obtain the position information of the target region in the target image.

Optionally, the image information module 301 is configured to:

if the coverage of the target feature points over the edge contour of the target region is greater than or equal to a preset range, fit the target feature points to obtain the position information of the target region in the target image.

Optionally, the image information module 301 is configured to:

determine a change parameter of the target image relative to the first image;

where performing optical-flow tracking on the target image using initial feature points determined from the first position information, to obtain target feature points, includes:

if it is determined based on the change parameter that the target image does not satisfy a reuse condition, performing the optical-flow tracking on the target image using the initial feature points determined from the first position information, to obtain the target feature points.

The shooting position determination apparatus provided by the embodiments of the present disclosure can perform the shooting position determination method provided by any embodiment of the present disclosure, and has the functional modules and beneficial effects corresponding to performing the method.

An embodiment of the present disclosure further provides a computer program product, including a computer program/instructions which, when executed by a processor, implement the shooting position determination method provided by any embodiment of the present disclosure.

When implemented in software, the above may be implemented wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus.
FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. Referring to FIG. 4, it shows a schematic structural diagram of an electronic device 400 suitable for implementing an embodiment of the present disclosure. The electronic device 400 in the embodiment of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), and a vehicle-mounted terminal (for example, a car navigation terminal), as well as stationary terminals such as a digital TV and a desktop computer. The electronic device shown in FIG. 4 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.

As shown in FIG. 4, the electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processor) 401, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data required for the operation of the electronic device 400. The processing device 401, the ROM 402, and the RAM 403 are connected to each other through a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.

Typically, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 407 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 408 including, for example, a magnetic tape and a hard disk; and a communication device 409. The communication device 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 4 shows the electronic device 400 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.

In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or installed from the storage device 408, or installed from the ROM 402. When the computer program is executed by the processing device 401, the above-mentioned functions defined in the shooting position determination method of the embodiments of the present disclosure are executed.

It should be noted that the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied thereon. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any suitable medium, including but not limited to an electrical wire, an optical fiber cable, RF (radio frequency), or any suitable combination of the foregoing.

In some implementations, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.

The above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist alone without being assembled into the electronic device.

The above-mentioned computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determine attribute information of a target region in a target image, where the attribute information includes position information and size information, and the target region is the region in the target image occupied by an object of a target shape; and determine shooting position information of the target image according to the attribute information of the target region and a camera projection model, where the shooting position information is the position of the shooting position relative to a world coordinate system.

Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).

The flowcharts and block diagrams in the drawings illustrate the possible architecture, functions, and operations of the systems, methods, and computer program products according to the various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.

The units involved in the embodiments of the present disclosure may be implemented in software or in hardware, and the name of a unit does not, in some cases, constitute a limitation on the unit itself.

The functions described above herein may be executed at least partly by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.

In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, the present disclosure provides a shooting position determination method, including:

determining attribute information of a target region in a target image, where the attribute information includes position information and size information, and the target region is the region in the target image occupied by an object of a target shape; and

determining shooting position information of the target image according to the attribute information of the target region and a camera projection model, where the shooting position information is the position of the shooting position relative to a world coordinate system.

According to one or more embodiments of the present disclosure, in the shooting position determination method provided by the present disclosure, determining the shooting position information of the target image according to the attribute information of the target region and the camera projection model includes:

substituting the attribute information of the target region into the projection equation of the camera projection model to determine displacement information from the shooting position to the target-shaped object in the world coordinate system; and

solving for the position according to the displacement information from the shooting position to the target-shaped object in the world coordinate system and the position information of the target-shaped object in the world coordinate system, to obtain the shooting position information.

According to one or more embodiments of the present disclosure, in the shooting position determination method provided by the present disclosure, in the projection equation, the position information of the target region is related to the size information of the target region, the internal parameters of the shooting apparatus, the rotation matrix from the coordinate system of the shooting position to the world coordinate system, and the position of the origin of the world coordinate system in the coordinate system of the shooting position.

According to one or more embodiments of the present disclosure, in the shooting position determination method provided by the present disclosure, the displacement information from the shooting position to the target-shaped object in the world coordinate system is related to the position of the origin of the world coordinate system in the coordinate system of the shooting position.

According to one or more embodiments of the present disclosure, in the shooting position determination method provided by the present disclosure, determining the position information of the target region in the target image includes:

extracting a first image from a target video and determining first position information of the target region in the first image;

performing optical-flow tracking on the target image using initial feature points determined from the first position information, to obtain target feature points, where the target image is the video frame adjacent to the first image in the target video; and

fitting the target feature points to obtain the position information of the target region in the target image.

According to one or more embodiments of the present disclosure, in the shooting position determination method provided by the present disclosure, fitting the target feature points to obtain the position information of the target region in the target image includes:

if the coverage of the target feature points over the edge contour of the target region is greater than or equal to a preset range, fitting the target feature points to obtain the position information of the target region in the target image.

According to one or more embodiments of the present disclosure, in the shooting position determination method provided by the present disclosure, after determining the first position information of the target region in the first image, the method further includes:

determining a change parameter of the target image relative to the first image;

where performing optical-flow tracking on the target image using initial feature points determined from the first position information, to obtain target feature points, includes:

if it is determined based on the change parameter that the target image does not satisfy a reuse condition, performing the optical-flow tracking on the target image using the initial feature points determined from the first position information, to obtain the target feature points.

According to one or more embodiments of the present disclosure, the present disclosure provides a shooting position determination apparatus, including:

an image information module, configured to determine attribute information of a target region in a target image, where the attribute information includes position information and size information, and the target region is the region in the target image occupied by an object of a target shape; and

a shooting position module, configured to determine shooting position information of the target image according to the attribute information of the target region and a camera projection model, where the shooting position information is the position of the shooting position relative to a world coordinate system.

According to one or more embodiments of the present disclosure, in the shooting position determination apparatus provided by the present disclosure, the shooting position module is configured to:

substitute the attribute information of the target region into the projection equation of the camera projection model to determine displacement information from the shooting position to the target-shaped object in the world coordinate system; and

solve for the position according to the displacement information from the shooting position to the target-shaped object in the world coordinate system and the position information of the target-shaped object in the world coordinate system, to obtain the shooting position information.

According to one or more embodiments of the present disclosure, in the shooting position determination apparatus provided by the present disclosure, in the projection equation, the position information of the target region is related to the size information of the target region, the internal parameters of the shooting apparatus, the rotation matrix from the coordinate system of the shooting position to the world coordinate system, and the position of the origin of the world coordinate system in the coordinate system of the shooting position.

According to one or more embodiments of the present disclosure, in the shooting position determination apparatus provided by the present disclosure, the displacement information from the shooting position to the target-shaped object in the world coordinate system is related to the position of the origin of the world coordinate system in the coordinate system of the shooting position.

According to one or more embodiments of the present disclosure, in the shooting position determination apparatus provided by the present disclosure, the image information module is configured to:

extract a first image from a target video and determine first position information of the target region in the first image;

perform optical-flow tracking on the target image using initial feature points determined from the first position information, to obtain target feature points, where the target image is the video frame adjacent to the first image in the target video; and

fit the target feature points to obtain the position information of the target region in the target image.

According to one or more embodiments of the present disclosure, in the shooting position determination apparatus provided by the present disclosure, the image information module is configured to:

if the coverage of the target feature points over the edge contour of the target region is greater than or equal to a preset range, fit the target feature points to obtain the position information of the target region in the target image.

According to one or more embodiments of the present disclosure, in the shooting position determination apparatus provided by the present disclosure, the image information module is configured to:

determine a change parameter of the target image relative to the first image;

where performing optical-flow tracking on the target image using initial feature points determined from the first position information, to obtain target feature points, includes:

if it is determined based on the change parameter that the target image does not satisfy a reuse condition, performing the optical-flow tracking on the target image using the initial feature points determined from the first position information, to obtain the target feature points.

According to one or more embodiments of the present disclosure, the present disclosure provides an electronic device, including:

a processor;

a memory for storing instructions executable by the processor;

where the processor is configured to read the executable instructions from the memory and execute the instructions to implement any of the shooting position determination methods provided by the present disclosure.

According to one or more embodiments of the present disclosure, the present disclosure provides a computer-readable storage medium storing a computer program, the computer program being used to execute any of the shooting position determination methods provided by the present disclosure.

According to one or more embodiments of the present disclosure, the present disclosure provides a computer program product, including a computer program/instructions which, when executed by a processor, implement any of the shooting position determination methods provided by the present disclosure.

The above description is merely a description of the preferred embodiments of the present disclosure and of the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to the technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.

In addition, although the operations are depicted in a specific order, this should not be understood as requiring that these operations be executed in the specific order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.

Although the subject matter has been described in language specific to structural features and/or method logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. Rather, the specific features and actions described above are merely example forms of implementing the claims.

Claims (11)

  1. A shooting position determination method, comprising:
    determining attribute information of a target region in a target image, wherein the attribute information includes position information and size information, and the target region is the region in the target image occupied by an object of a target shape; and
    determining shooting position information of the target image according to the attribute information of the target region and a camera projection model, wherein the shooting position information is the position of the shooting position relative to a world coordinate system.
  2. The method according to claim 1, wherein determining the shooting position information of the target image according to the attribute information of the target region and the camera projection model comprises:
    substituting the attribute information of the target region into a projection equation of the camera projection model to determine displacement information from the shooting position to the target-shaped object in the world coordinate system; and
    solving for the position according to the displacement information from the shooting position to the target-shaped object in the world coordinate system and position information of the target-shaped object in the world coordinate system, to obtain the shooting position information.
  3. The method according to claim 2, wherein in the projection equation, the position information of the target region is related to the size information of the target region, internal parameters of a shooting apparatus, a rotation matrix from the coordinate system of the shooting position to the world coordinate system, and the position of the origin of the world coordinate system in the coordinate system of the shooting position.
  4. The method according to claim 3, wherein the displacement information from the shooting position to the target-shaped object in the world coordinate system is related to the position of the origin of the world coordinate system in the coordinate system of the shooting position.
  5. The method according to claim 1, wherein determining the position information of the target region in the target image comprises:
    extracting a first image from a target video, and determining first position information of the target region in the first image;
    performing optical-flow tracking on the target image using initial feature points determined from the first position information, to obtain target feature points, wherein the target image is the video frame adjacent to the first image in the target video; and
    fitting the target feature points to obtain the position information of the target region in the target image.
  6. The method according to claim 5, wherein fitting the target feature points to obtain the position information of the target region in the target image comprises:
    if the coverage of the target feature points over the edge contour of the target region is greater than or equal to a preset range, fitting the target feature points to obtain the position information of the target region in the target image.
  7. The method according to claim 5, wherein after determining the first position information of the target region in the first image, the method further comprises:
    determining a change parameter of the target image relative to the first image;
    wherein performing optical-flow tracking on the target image using initial feature points determined from the first position information, to obtain target feature points, comprises:
    if it is determined based on the change parameter that the target image does not satisfy a reuse condition, performing the optical-flow tracking on the target image using the initial feature points determined from the first position information, to obtain the target feature points.
  8. A shooting position determination apparatus, comprising:
    an image information module, configured to determine attribute information of a target region in a target image, wherein the attribute information includes position information and size information, and the target region is the region in the target image occupied by an object of a target shape; and
    a shooting position module, configured to determine shooting position information of the target image according to the attribute information of the target region and a camera projection model, wherein the shooting position information is the position of the shooting position relative to a world coordinate system.
  9. An electronic device, comprising:
    a processor;
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the shooting position determination method according to any one of claims 1-7.
  10. A computer-readable storage medium, wherein the storage medium stores a computer program, and the computer program is used to execute the shooting position determination method according to any one of claims 1-7.
  11. A computer program product, comprising a computer program/instructions, wherein the computer program/instructions, when executed by a processor, implement the shooting position determination method according to any one of claims 1-7.
PCT/CN2022/080916 2021-03-15 2022-03-15 Shooting position determination method, apparatus, device, and medium WO2022194145A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/468,647 US20240005552A1 (en) 2021-03-15 2023-09-15 Target tracking method and apparatus, device, and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110277835.1A CN115086541B (zh) 2021-03-15 2021-03-15 Shooting position determination method, apparatus, device, and medium
CN202110277835.1 2021-03-15

Related Child Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/080985 Continuation-In-Part WO2022194158A1 (zh) 2021-03-15 2022-03-15 Target tracking method, apparatus, device, and medium

Publications (1)

Publication Number Publication Date
WO2022194145A1 true WO2022194145A1 (zh) 2022-09-22

Family

ID=83240562

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/080916 WO2022194145A1 (zh) 2021-03-15 2022-03-15 Shooting position determination method, apparatus, device, and medium

Country Status (2)

Country Link
CN (1) CN115086541B (zh)
WO (1) WO2022194145A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115526672B (zh) * 2022-11-23 2023-04-07 深圳市亲邻科技有限公司 Advertisement placement photo review method, apparatus, medium, and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104240289A (zh) * 2014-07-16 2014-12-24 崔岩 Three-dimensional digital reconstruction method and system based on a single camera
US20150199817A1 (en) * 2012-07-13 2015-07-16 Denso Corporation Position detection device and position detection program
CN108519088A (zh) * 2018-03-05 2018-09-11 华南理工大学 Visible-light visual positioning method based on an artificial neural network
CN109190612A (zh) * 2018-11-12 2019-01-11 朱炳强 Image acquisition and processing device and image acquisition and processing method
CN110807807A (zh) * 2018-08-01 2020-02-18 深圳市优必选科技有限公司 Pattern, method, apparatus, and device for monocular-vision target positioning
CN111311681A (zh) * 2020-02-14 2020-06-19 北京云迹科技有限公司 Visual positioning method, apparatus, robot, and computer-readable storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08201021A (ja) * 1995-01-23 1996-08-09 Mazda Motor Corp Calibration method
JP4885584B2 (ja) * 2006-03-23 2012-02-29 株式会社スペースビジョン Rangefinder calibration method and apparatus
CN109685855B (zh) * 2018-12-05 2022-10-14 长安大学 Camera calibration optimization method under a road cloud monitoring platform
CN109579701A (zh) * 2018-12-17 2019-04-05 吉林大学 Method for eliminating ellipse center projection distortion based on a structured-light vision measurement system
CN110335292B (zh) * 2019-07-09 2021-04-30 北京猫眼视觉科技有限公司 Method, system, and terminal for simulated scene tracking based on picture tracking


Also Published As

Publication number Publication date
CN115086541A (zh) 2022-09-20
CN115086541B (zh) 2023-12-22

Similar Documents

Publication Publication Date Title
CN111127563A Joint calibration method, apparatus, electronic device, and storage medium
CN110728622B Fisheye image processing method and apparatus, electronic device, and computer-readable medium
WO2022033111A1 Image information extraction method, training method and apparatus, medium, and electronic device
WO2022171036A1 Video target tracking method, video target tracking apparatus, storage medium, and electronic device
WO2022028254A1 Positioning model optimization method, positioning method, and positioning device
WO2022116947A1 Video cropping method and apparatus, storage medium, and electronic device
CN114993328B Vehicle positioning evaluation method, apparatus, device, and computer-readable medium
CN114399588B Three-dimensional lane line generation method, apparatus, electronic device, and computer-readable medium
WO2022194145A1 Shooting position determination method, apparatus, device, and medium
WO2022028253A1 Positioning model optimization method, positioning method, positioning device, and storage medium
CN112257598B Method and apparatus for recognizing quadrilaterals in an image, readable medium, and electronic device
WO2022194158A1 Target tracking method, apparatus, device, and medium
WO2023020268A1 Gesture recognition method, apparatus, device, and medium
CN115086538B Shooting position determination method, apparatus, device, and medium
WO2022105622A1 Image segmentation method and apparatus, readable medium, and electronic device
CN112037280A Object distance measurement method and apparatus
WO2022194157A1 Target tracking method, apparatus, device, and medium
CN114863025B Three-dimensional lane line generation method, apparatus, electronic device, and computer-readable medium
US20240005552A1 Target tracking method and apparatus, device, and medium
WO2023025181A1 Image recognition method, apparatus, and electronic device
CN113808050B 3D point cloud denoising method, apparatus, device, and storage medium
CN111860209B Hand recognition method, apparatus, electronic device, and storage medium
CN112634934B Voice detection method and apparatus
WO2024036764A1 Image processing method, apparatus, device, and medium
CN117409072A Descriptor determination method, apparatus, device, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22770498

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22770498

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21-02-2024)