CN115086541B - Shooting position determining method, device, equipment and medium


Publication number
CN115086541B
Authority
CN
China
Prior art keywords
target
information
coordinate system
image
position information
Prior art date
Legal status
Active
Application number
CN202110277835.1A
Other languages
Chinese (zh)
Other versions
CN115086541A (en)
Inventor
郭亨凯 (Guo Hengkai)
杜思聪 (Du Sicong)
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202110277835.1A priority Critical patent/CN115086541B/en
Priority to PCT/CN2022/080916 priority patent/WO2022194145A1/en
Publication of CN115086541A publication Critical patent/CN115086541A/en
Priority to US18/468,647 priority patent/US20240005552A1/en
Application granted granted Critical
Publication of CN115086541B publication Critical patent/CN115086541B/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 - Complex mathematical operations
    • G06F 17/16 - Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules

Abstract

The embodiments of the disclosure relate to a shooting position determining method, apparatus, device and medium. The method includes the following steps: determining attribute information of a target area in a target image, where the attribute information includes position information and size information, and the target area is the area where a target-shape object is located in the target image; and determining shooting position information of the target image according to the attribute information of the target area and a camera projection model, where the shooting position information is the position information of the shooting position relative to a world coordinate system. With this technical scheme, the shooting position can be determined from the position and size of the area where the target-shape object is located in a single image together with the camera projection model. Compared with existing schemes that require multiple images while using only single-dimensional information, using information in these two dimensions allows the shooting position to be located efficiently, improving computational efficiency.

Description

Shooting position determining method, device, equipment and medium
Technical Field
The disclosure relates to the technical field of image processing, and in particular relates to a shooting position determining method, device, equipment and medium.
Background
With the continuous development of image processing technology, image-based end products are becoming increasingly common.
At present, methods that determine the shooting position based on an image suffer from low computational efficiency and low speed.
Disclosure of Invention
In order to solve the above technical problems or at least partially solve the above technical problems, the present disclosure provides a shooting position determining method, apparatus, device and medium.
The embodiment of the disclosure provides a shooting position determining method, which comprises the following steps:
determining attribute information of a target area in a target image, wherein the attribute information comprises position information and size information, and the target area is an area where a target shape object in the target image is located;
and determining shooting position information of the target image according to the attribute information of the target area and a camera projection model, wherein the shooting position information is position information of a shooting position relative to a world coordinate system.
The embodiment of the disclosure also provides a shooting position determining device, which comprises:
the image information module is used for determining attribute information of a target area in a target image, wherein the attribute information comprises position information and size information, and the target area is an area where a target shape object in the target image is located;
And the shooting position module is used for determining shooting position information of the target image according to the attribute information of the target area and a camera projection model, wherein the shooting position information is position information of a shooting position relative to a world coordinate system.
The embodiment of the disclosure also provides an electronic device, which comprises: a processor; a memory for storing the processor-executable instructions; the processor is configured to read the executable instructions from the memory and execute the instructions to implement the shooting position determining method provided by the embodiment of the present disclosure.
The present disclosure also provides a computer-readable storage medium storing a computer program for executing the photographing position determining method as provided by the embodiments of the present disclosure.
Compared with the prior art, the technical scheme provided by the embodiments of the disclosure has the following advantages. According to the shooting position determining scheme provided by the embodiments of the disclosure, attribute information of a target area in a target image is determined, where the attribute information includes position information and size information, and the target area is the area where a target-shape object is located in the target image; and shooting position information of the target image is determined according to the attribute information of the target area and the camera projection model, where the shooting position information is the position information of the shooting position relative to a world coordinate system. With this technical scheme, the shooting position can be determined from the position and size of the area where the target-shape object is located in a single image together with the camera projection model. Compared with existing schemes that require multiple images while using only single-dimensional information, using information in these two dimensions allows the shooting position to be located efficiently, improving computational efficiency.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a flowchart of a shooting position determining method according to an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating another shooting position determining method according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a photographing position determining apparatus according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit its scope of protection.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" or "an" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Fig. 1 is a flowchart of a method for determining a photographing position according to an embodiment of the present disclosure, where the method may be performed by a photographing position determining apparatus, and the apparatus may be implemented in software and/or hardware, and may be generally integrated in an electronic device. As shown in fig. 1, the method includes:
step 101, determining attribute information of a target area in a target image, wherein the attribute information comprises position information and size information, and the target area is an area where a target shape object in the target image is located.
The target image may be any image captured by a capturing device that needs to determine a position, may be an image captured in real time, or may be any image frame in a video captured in real time, and is not particularly limited. The target area may be an area where an object with a target shape in the target image is located, that is, an area with a target shape, where the target shape is a shape that can be represented by an equation, for example, the target shape may include an ellipse, a circle, and the like.
The position information may be information capable of characterizing the position of the target region in the target image, and may specifically include information such as the vertex coordinates and center-point coordinates of the target region in the target image. The size information refers to the size of the target area. For example, when the target area is an elliptical area, the attribute information may include the center-point coordinates of the elliptical area, the major-axis and minor-axis sizes, and the like.
In the embodiment of the disclosure, after the target image is acquired, any detection mode may be used to determine the position information and the size information of the target area in the target image, for example, a preset detection algorithm or a feature point tracking algorithm may be used to determine the position information.
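As a hedged illustration of such a preset detection algorithm (the text does not name a specific one), the sketch below fits a circle, one of the admissible target shapes, to candidate contour points with an algebraic least-squares (Kasa-style) fit; the function name and interface are assumptions, not part of the patent.

```python
import numpy as np

def fit_circle(pts):
    """Algebraic least-squares circle fit (Kasa method); a stand-in
    illustration of detecting a target-shape area's position and size."""
    pts = np.asarray(pts, dtype=float)
    # Solve x^2 + y^2 = 2*cx*x + 2*cy*y + c, with c = r^2 - cx^2 - cy^2.
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = float(np.sqrt(c + cx ** 2 + cy ** 2))
    return (float(cx), float(cy)), r

# Points on a circle of centre (1, 2) and radius 3:
center, radius = fit_circle([(4, 2), (1, 5), (-2, 2), (1, -1)])
```

An ellipse detector would follow the same pattern with a five-parameter conic fit instead of the three-parameter circle model.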
Step 102, determining shooting position information of the target image according to the attribute information of the target area and the camera projection model, wherein the shooting position information is position information of the shooting position relative to a world coordinate system.
The shooting position information may be the position, relative to a world coordinate system, of the photographing device that captured the target image, and the camera projection model may be the pinhole projection model of the photographing device. The photographing device in the embodiments of the disclosure may be any device with an image-acquisition function: it may be a standalone camera, or it may be a photographing module on a terminal device, for example a camera module on a mobile phone.
In an embodiment of the present disclosure, determining shooting position information of a target image according to attribute information of the target area and a camera projection model includes: inputting attribute information of a target area into a projection equation of a camera projection model, and determining displacement information from a shooting position to a target shape object under a world coordinate system; and carrying out position solving according to the displacement information from the shooting position to the target shape object under the world coordinate system and the position information of the target shape object under the world coordinate system to obtain shooting position information.
The position information of the target-shape object in the world coordinate system may be a preset fixed value and hence known; for example, the origin of the world coordinate system may be placed at the position of the target-shape object, so that its position coordinates are (0, 0, 0). The shooting position information is then determined from the position of the target-shape object, the displacement between the target-shape object and the shooting position, and a transformation equation. The transformation equation may be expressed as W10 = W20 + W12, where W10 denotes the shooting position information in the world coordinate system, W20 denotes the position information of the target-shape object in the world coordinate system, and W12 denotes the displacement information between the target-shape object and the shooting position in the world coordinate system. W10, W20 and W12 are all vectors in the world coordinate system, with both direction and magnitude: W10 points from the origin of the world coordinate system to the shooting position, W20 points from the origin of the world coordinate system to the target-shape object, and W12 points from the target-shape object to the shooting position. The two vectors W20 and W12 are added by the triangle rule: connecting them head to tail in order, the result W10 points from the starting point of the first vector to the end point of the last.
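The transformation equation reduces to a single vector addition; a minimal numeric sketch (all values are illustrative assumptions, variable names taken from the text):

```python
import numpy as np

W20 = np.zeros(3)                  # target-shape object fixed at the world origin
W12 = np.array([0.1, -0.2, 1.5])   # displacement recovered from the projection equation
W10 = W20 + W12                    # shooting position in the world coordinate system
```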
The quantity W12 above is unknown and can be computed by substituting the position information and size information of the target area into the projection equation of the camera projection model. In the projection equation, the position information of the target area is related to the size information of the target area, the internal parameters of the photographing device, the rotation matrix from the coordinate system of the shooting position to the world coordinate system, and the position of the origin of the world coordinate system in the coordinate system of the shooting position. The projection equation may be expressed as p = π[K(R12·W20 + T)], where π denotes a coefficient determined from the size information of the target area; K denotes the internal parameters of the photographing device, which may specifically include parameters internal to the device such as its focal length and distortion parameters; R12 denotes the rotation matrix from the coordinate system of the shooting position to the world coordinate system; p denotes the position information of the target area; and T denotes the position of the origin of the world coordinate system in the coordinate system of the shooting position.
The displacement information W12 between the shooting position and the target-shape object in the world coordinate system is related to the position T of the origin of the world coordinate system in the coordinate system of the shooting position. Rearranging the projection equation p = π[K(R12·W20 + T)] gives K⁻¹·ratio·p = R12·W20 + T, where ratio = 1/π. Multiplying both sides by R21, the inverse of the rotation matrix R12, gives R21·K⁻¹·ratio·p = W20 + W01 = W21, where W01 = R21·T. Here W01 denotes the vector from the shooting position to the origin of the world coordinate system (the inverse vector of the shooting position information), and W21 denotes the vector from the shooting position to the target-shape object in the world coordinate system. It follows that W12 = -W21 = -ratio·R21·K⁻¹·p. When the target area is an elliptical area, ratio is inversely proportional to the sum of the major and minor axes of the ellipse; that is, the larger the elliptical area, the smaller the ratio.
After the displacement information W12 in the world coordinate system is determined, the shooting position information W10 in the world coordinate system can be obtained from the transformation equation W10 = W20 + W12. Optionally, the position T = -R12·W10 of the origin of the world coordinate system in the coordinate system of the shooting position may be determined by coordinate-system transformation, and the shooting position information M = R23·W10 in the target-shape-object coordinate system may be determined, where R23 denotes the rotation matrix from the world coordinate system to the target-shape-object coordinate system. The origin of the coordinate system of the shooting position is the shooting position, and the origin of the target-shape-object coordinate system is the target-shape object.
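The optional end-of-step transforms, written here as T = -R12·W10 and M = R23·W10 (the matrix-vector order is an assumption about the text's notation), can be sketched as follows, with rotations set to identity purely for illustration:

```python
import numpy as np

R12 = np.eye(3)                    # rotation: shooting-position frame -> world (assumed)
R23 = np.eye(3)                    # rotation: world -> target-shape-object frame (assumed)
W10 = np.array([0.1, -0.2, 1.5])   # shooting position in the world frame

T = -R12 @ W10   # world-coordinate origin expressed in the shooting-position frame
M = R23 @ W10    # shooting position expressed in the target-shape-object frame
```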
In the embodiment of the disclosure, if the target image is an image frame in one video and the shooting position needs to be determined for each image frame in the video, after the shooting position is determined for each image frame in the video in the above manner, a kalman filtering algorithm can be used for performing smoothing operation, so that jump of the shooting position is avoided, and accuracy of determining shooting position information is improved.
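The text names Kalman filtering but gives no equations; below is a minimal per-axis constant-position Kalman smoother over per-frame shooting positions. The variances, function name and interface are assumptions, not part of the patent.

```python
import numpy as np

def smooth_positions(positions, process_var=1e-3, meas_var=1e-2):
    """Smooth a sequence of per-frame 3-D shooting positions with a
    constant-position Kalman filter, damping frame-to-frame jumps."""
    positions = np.asarray(positions, dtype=float)
    x = positions[0].copy()          # state estimate
    P = np.ones_like(x)              # estimate variance (per axis)
    out = [x.copy()]
    for z in positions[1:]:
        P = P + process_var          # predict: uncertainty grows
        K = P / (P + meas_var)       # Kalman gain
        x = x + K * (z - x)          # update with the new measurement z
        P = (1.0 - K) * P
        out.append(x.copy())
    return np.array(out)

smoothed = smooth_positions([[0, 0, 0], [1, 1, 1], [1, 1, 1]])
```

A sudden jump in the measured position is pulled only part of the way toward the new value, which is the smoothing behaviour the text asks for.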
According to the shooting position determining scheme provided by the embodiments of the disclosure, attribute information of a target area in a target image is determined, where the attribute information includes position information and size information, and the target area is the area where a target-shape object is located in the target image; and shooting position information of the target image is determined according to the attribute information of the target area and the camera projection model, where the shooting position information is the position information of the shooting position relative to a world coordinate system. With this technical scheme, the shooting position can be determined from the position and size of the area where the target-shape object is located in a single image together with the camera projection model. Compared with existing schemes that require multiple images while using only single-dimensional information, using information in these two dimensions allows the shooting position to be located efficiently, improving computational efficiency.
In some embodiments, determining location information of a target region in a target image includes: extracting a first image in a target video, and determining first position information of a target area in the first image; performing optical flow tracking on the target image according to the initial feature points determined by the first position information to obtain target feature points; the target image is an adjacent video frame of a first image in the target video; fitting the target characteristic points to obtain the position information of the target region in the target image.
The target video may be a video containing the target image; it may be any video that needs detection and tracking, a video captured by a device with a video-capture function, or a video obtained from the Internet or other devices, and is not particularly limited. The target image may be any image frame in the target video, and the first image may be the previous image frame adjacent to the target image in time order in the target video. The first position information refers to the position information of the target area in the first image, and may include information such as vertex coordinates and center-point coordinates.
In the embodiment of the disclosure, a preset detection algorithm is adopted to detect a target area of a first image, and first position information of the target area in the first image is determined. The preset detection algorithm may be a detection algorithm based on deep learning or a contour detection algorithm, and the like, and may specifically be determined according to an actual situation, for example, when the target area is an elliptical area, the preset detection algorithm may be any elliptical detection algorithm, and the elliptical detection algorithm is adopted to perform contour detection on the first image, and then an elliptical contour obtained by contour detection is fitted to obtain first position information of the target area in the first image.
Determining an initial feature point according to the first position information, including: and sampling the edge contour of the target area in the first image according to the first position information, and determining the initial characteristic points. Optionally, sampling an edge contour of the target area in the first image according to the first position information, and determining the initial feature point includes: when the target area is an elliptical area, representing the target area under polar coordinates according to the first position information to obtain an elliptical contour; wherein the first position information comprises vertex coordinates and/or center point coordinates of the target area in the first image; sampling is carried out in the elliptic contour according to the preset polar angle interval, and initial characteristic points are obtained.
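A hedged sketch of the sampling step above, using the ellipse's parametric angle as a stand-in for the polar angle described in the text; the function name and default interval are assumptions.

```python
import numpy as np

def sample_ellipse(cx, cy, a, b, theta, step_deg=10.0):
    """Sample initial feature points on an ellipse contour at fixed
    angle intervals (a, b: semi-axes; theta: ellipse rotation)."""
    t = np.deg2rad(np.arange(0.0, 360.0, step_deg))
    x = cx + a * np.cos(t) * np.cos(theta) - b * np.sin(t) * np.sin(theta)
    y = cy + a * np.cos(t) * np.sin(theta) + b * np.sin(t) * np.cos(theta)
    return np.stack([x, y], axis=1)

# Axis-aligned ellipse with semi-axes 2 and 1, sampled every 90 degrees:
pts = sample_ellipse(0.0, 0.0, 2.0, 1.0, 0.0, step_deg=90.0)
```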
And then tracking the initial characteristic points obtained by sampling by adopting an optical flow tracking algorithm, reserving the characteristic points successfully tracked as target characteristic points, and eliminating the characteristic points failed to be tracked. Fitting the target characteristic points to obtain the position information of the target region in the target image.
In some embodiments, fitting the target feature points to obtain location information of the target region in the target image includes: and if the coverage area of the target feature points on the edge contour of the target area is larger than or equal to a preset range, fitting the target feature points to obtain the position information of the target area in the target image. The preset range refers to a preset range satisfying the shape of the target area, and may be specifically set according to practical situations, for example, the preset range may be 3/4 of the entire range of the edge profile. Specifically, after the target feature points are determined, whether the coverage area of the target point on the edge contour of the target area is larger than or equal to a preset range or not can be judged, if yes, a fitting algorithm is adopted to fit the target feature points, and the position information of the target area in the target image is obtained. If the coverage area of the target feature points on the edge contour of the target area is smaller than the preset range, the target image can be directly detected by adopting a preset detection algorithm, and the position information of the target area in the target image is determined.
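The coverage check can be approximated by binning the surviving feature points by their angle on the contour and requiring a fraction of bins (the 3/4 example above) to be occupied. Everything in this sketch, including the bin width, is an illustrative assumption.

```python
import numpy as np

def coverage_ok(tracked_angles_deg, required_fraction=0.75, bin_deg=10.0):
    """True if the tracked points cover at least `required_fraction`
    of the angular bins on the target area's edge contour."""
    bins = np.zeros(int(360 / bin_deg), dtype=bool)
    idx = (np.asarray(tracked_angles_deg, dtype=float) % 360 / bin_deg).astype(int)
    bins[idx] = True
    return bool(bins.mean() >= required_fraction)
```

When this check fails, the text falls back to running the preset detection algorithm directly on the target image.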
In some embodiments, after determining the first position information of the target area in the first image, the method further includes: determining a change parameter of the target image relative to the first image. Performing optical flow tracking on the target image according to the initial feature points determined by the first position information to obtain the target feature points then includes: if it is determined, based on the change parameter, that the target image does not meet the multiplexing condition, executing the step of performing optical flow tracking on the target image according to the initial feature points determined from the first position information, to obtain the target feature points.
The change parameters are parameters characterizing the change of the target image relative to the first image. Optionally, determining the change parameter of the target image relative to the first image may include: extracting first feature points in the first image; performing optical flow tracking on the target image according to the first feature points to determine second feature points; and determining the moving distance between the second feature points and the first feature points as the change parameter. The first feature points may be corners obtained by detecting the first image with a FAST corner detection algorithm. The multiplexing condition is the judgment condition used to determine whether the target image can reuse the position of the target area found in the first image. The change threshold is a preset threshold and may be set according to the actual situation; for example, when the change parameter is represented by the movement of feature points in the target image relative to the corresponding feature points in the first image, the change threshold may be set to a distance threshold of 0.8.
Specifically, after determining a change parameter of the target image relative to the first image, the change parameter and a change threshold value may be compared, if the change parameter is determined to be greater than the change threshold value, it may be determined that the target image does not meet a multiplexing condition, and re-tracking is required, and optical flow tracking is performed on the target image by executing an initial feature point determined according to the first position information, so as to obtain a target feature point; and if the target image is determined to meet the multiplexing condition, determining the first position information as the position information of the target area in the target image.
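A minimal sketch of the multiplexing judgment above, using the mean feature-point displacement as the change parameter and the 0.8 distance threshold mentioned in the text; the function name and interface are assumptions.

```python
import numpy as np

def should_retrack(first_pts, second_pts, change_threshold=0.8):
    """True if the mean displacement between corresponding feature points
    exceeds the change threshold, i.e. the multiplexing condition fails
    and the target image must be re-tracked."""
    d = np.linalg.norm(np.asarray(second_pts, dtype=float)
                       - np.asarray(first_pts, dtype=float), axis=1)
    return bool(d.mean() > change_threshold)
```

When this returns False, the previous frame's target-area position can be reused directly without re-detection.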
In this scheme, on the basis of detecting the target area in one image frame of the video, the position of the target area in the target image can be determined more accurately through feature-point tracking and fitting, and the computational efficiency of determining that position is improved. Moreover, by adding a multiplexing-condition judgment for two adjacent video frames, when the two adjacent frames change greatly, the position of the target area is determined by feature-point tracking and fitting; when the change or difference between two adjacent frames is small, their similarity is high, and the next frame can directly reuse the position information of the target area from the previous frame without re-detection, saving work and improving computational efficiency.
Fig. 2 is a flowchart of another photographing position determining method according to an embodiment of the present disclosure, where the photographing position determining method is further optimized based on the foregoing embodiment. As shown in fig. 2, the method includes:
step 201, determining attribute information of a target area in a target image.
The attribute information comprises position information and size information, and the target area is the area where the target shape object is located in the target image.
Optionally, determining the location information of the target area in the target image includes: extracting a first image in a target video, and determining first position information of a target area in the first image; performing optical flow tracking on the target image according to the initial feature points determined by the first position information to obtain target feature points; the target image is an adjacent video frame of a first image in the target video; fitting the target characteristic points to obtain the position information of the target region in the target image.
Optionally, fitting the target feature points to obtain the position information of the target area in the target image includes: if the coverage area of the target feature points on the edge contour of the target area is greater than or equal to a preset range, fitting the target feature points to obtain the position information of the target area in the target image. Optionally, after determining the first position information of the target area in the first image, the method further includes: determining a change parameter of the target image relative to the first image; and performing optical flow tracking on the target image according to the initial feature points determined by the first position information to obtain the target feature points includes: if it is determined, based on the change parameter, that the target image does not meet the multiplexing condition, executing the step of performing optical flow tracking on the target image according to the initial feature points determined from the first position information, to obtain the target feature points.
And 202, inputting attribute information of the target area into a projection equation of a projection model of the camera, and determining displacement information from the shooting position to the target shape object under the world coordinate system.
The shooting position information is position information of the shooting position relative to a world coordinate system.
Optionally, the position information of the target area in the projection equation is related to the size information of the target area, the internal parameters of the photographing device, a rotation matrix from the coordinate system of the shooting position to the world coordinate system, and the position information of the origin of the world coordinate system under the coordinate system of the shooting position. Optionally, the displacement information from the shooting position to the target-shaped object in the world coordinate system is related to the position information of the origin of the world coordinate system in the coordinate system where the shooting position is located.
In step 203, position solving is performed according to the displacement information from the shooting position to the target-shaped object in the world coordinate system and the position information of the target-shaped object in the world coordinate system, so as to obtain the shooting position information.
The photographing position determining method provided by the embodiment of the present disclosure is further described below by way of a specific example. Assuming that the target-shaped object is an elliptical object, the specific process may include: 1. Obtain the center-point coordinates and the size (major and minor axes) of the elliptical region in the target image using any ellipse detection algorithm. 2. Determine the shooting position based on the pinhole projection model (i.e., the camera projection model) and the ellipse position and size. The projection equation may be expressed as p = π[K(R12·W12 + T)], where π represents a coefficient determined based on the size information of the target region, K represents the internal parameters of the photographing device, R12 represents the rotation matrix from the coordinate system where the shooting position is located to the world coordinate system, p represents the center-point coordinates of the elliptical region, and T represents the vector, expressed in the coordinate system where the shooting position is located, from the origin of the world coordinate system to the origin of the shooting-position coordinate system; the origin of the shooting-position coordinate system is the shooting position, and the origin of the world coordinate system corresponds to the position information of the target-shaped object. W12 represents the displacement information from the shooting position to the target area in the world coordinate system.
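Under the same elliptical-object assumption, the two steps above can be sketched numerically. The sketch below is a simplified pinhole-model illustration, not the disclosure's exact formulation: the coefficient determined by the size information is taken here as the object depth recovered from a known physical radius and the apparent pixel radius, and the function and parameter names are hypothetical.

```python
import numpy as np

def solve_shooting_position(center_px, radius_px, real_radius, K, R_cw,
                            object_world_pos):
    """Recover the shooting position in the world coordinate system.

    center_px : (u, v) pixel coordinates of the ellipse center (position info).
    radius_px : apparent radius of the object in pixels (size info).
    real_radius : known physical radius of the circular target object.
    K : 3x3 camera intrinsic matrix (internal parameters).
    R_cw : 3x3 rotation from the shooting-position (camera) coordinate
           system to the world coordinate system.
    object_world_pos : position of the target-shaped object in the world frame.
    """
    # The size information fixes the scale: depth of the object along the ray.
    depth = K[0, 0] * real_radius / radius_px
    # Back-project the ellipse center into the camera coordinate system.
    ray = np.linalg.inv(K) @ np.array([center_px[0], center_px[1], 1.0])
    X_cam = depth * ray                     # object in camera coordinates
    # Displacement from shooting position to object, in world coordinates.
    W12 = R_cw @ X_cam
    # Position solving: shooting position = object position - displacement.
    return object_world_pos - W12
```

For example, with an 800-pixel focal length, a 0.5 m object that appears 80 pixels wide lies 5 m ahead, and subtracting the world-frame displacement from the object's world position yields the shooting position.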
The related art uses only single-dimensional information and must determine feature points in a plurality of images, which is inefficient. In the shooting position determining approach above, the attribute information of the target area in the image includes information in two dimensions, so the shooting position can be determined by detecting the area where the target-shaped object is located in a single image together with the camera projection model: the target shape can be characterized by a single equation, which provides more constraints and improves determination efficiency.
According to the shooting position determining scheme provided by the embodiment of the present disclosure, attribute information of a target area in a target image is determined, wherein the attribute information includes position information and size information, and the target area is the area where a target-shaped object in the target image is located; and shooting position information of the target image is determined according to the attribute information of the target area and a camera projection model, wherein the shooting position information is position information of the shooting position relative to a world coordinate system. With this technical solution, the shooting position can be determined using the position and size of the area where the target-shaped object is located in a single image together with the camera projection model. Compared with current schemes that use only single-dimensional information from a plurality of images, using information in two dimensions allows the shooting position to be located efficiently and improves computational efficiency.
Fig. 3 is a schematic structural diagram of a shooting position determining apparatus according to an embodiment of the present disclosure, where the apparatus may be implemented by software and/or hardware, and may be generally integrated in an electronic device. As shown in fig. 3, the apparatus includes:
an image information module 301, configured to determine attribute information of a target area in a target image, where the attribute information includes location information and size information, and the target area is an area where a target shape object in the target image is located;
and a shooting position module 302, configured to determine shooting position information of the target image according to attribute information of the target area and a camera projection model, where the shooting position information is position information of a shooting position relative to a world coordinate system.
Optionally, the shooting location module 302 is configured to:
inputting the attribute information of the target area into a projection equation of the camera projection model, and determining displacement information from the shooting position to the target-shaped object under a world coordinate system;
and carrying out position solving according to the displacement information from the shooting position to the target shape object under the world coordinate system and the position information of the target shape object under the world coordinate system to obtain the shooting position information.
Optionally, the position information of the target area in the projection equation is related to the size information of the target area, the internal parameters of the photographing device, a rotation matrix from the coordinate system where the photographing position is located to the world coordinate system, and the position information of the origin of the world coordinate system under the coordinate system where the photographing position is located.
Optionally, the displacement information from the shooting position to the target shape object in the world coordinate system is related to the position information of the origin of the world coordinate system in the coordinate system where the shooting position is located.
Optionally, the image information module 301 is configured to:
extracting a first image in a target video, and determining first position information of a target area in the first image;
performing optical flow tracking on the target image according to the initial feature points determined by the first position information to obtain target feature points; wherein the target image is an adjacent video frame of the first image in the target video;
fitting the target characteristic points to obtain the position information of the target region in the target image.
Optionally, the image information module 301 is configured to:
and if the coverage area of the target feature points on the edge contour of the target area is larger than or equal to a preset range, fitting the target feature points to obtain the position information of the target area in the target image.
Optionally, the image information module 301 is configured to:
determining a variation parameter of the target image relative to the first image;
performing optical flow tracking on the target image according to the initial feature points determined by the first position information to obtain target feature points, including:
and if it is determined, based on the change parameter, that the multiplexing condition is not met, performing the step of optical flow tracking on the target image according to the initial feature points determined by the first position information, so as to obtain the target feature points.
The shooting position determining device provided by the embodiment of the disclosure can execute the shooting position determining method provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of the executing method.
The disclosed embodiments also provide a computer program product comprising a computer program/instructions which, when executed by a processor, implement the shooting position determination method provided by any of the embodiments of the present disclosure.
Fig. 4 is a schematic structural diagram of an electronic device 400 suitable for implementing embodiments of the present disclosure. The electronic device 400 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 4 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 4, the electronic device 400 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 401, which may perform various suitable actions and processes according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage means 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic device 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
In general, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, magnetic tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate with other devices wirelessly or by wire to exchange data. While fig. 4 shows an electronic device 400 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communications device 409, or from storage 408, or from ROM 402. When the computer program is executed by the processing device 401, the above-described functions defined in the shooting position determination method of the embodiment of the present disclosure are performed.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determining attribute information of a target area in a target image, wherein the attribute information comprises position information and size information, and the target area is an area where a target shape object in the target image is located; and determining shooting position information of the target image according to the attribute information of the target area and a camera projection model, wherein the shooting position information is position information of a shooting position relative to a world coordinate system.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, the present disclosure provides a photographing position determining method including:
determining attribute information of a target area in a target image, wherein the attribute information comprises position information and size information, and the target area is an area where a target shape object in the target image is located;
and determining shooting position information of the target image according to the attribute information of the target area and a camera projection model, wherein the shooting position information is position information of a shooting position relative to a world coordinate system.
According to one or more embodiments of the present disclosure, in the photographing position determining method provided by the present disclosure, determining photographing position information of the target image according to attribute information of the target area and a camera projection model includes:
inputting the attribute information of the target area into a projection equation of the camera projection model, and determining displacement information from the shooting position to the target-shaped object under a world coordinate system;
and carrying out position solving according to the displacement information from the shooting position to the target shape object under the world coordinate system and the position information of the target shape object under the world coordinate system to obtain the shooting position information.
According to one or more embodiments of the present disclosure, in the photographing position determining method provided by the present disclosure, the position information of the target area in the projection equation is related to the size information of the target area, the internal parameters of the photographing device, the rotation matrix from the coordinate system where the photographing position is located to the world coordinate system, and the position information of the origin of the world coordinate system under the coordinate system where the photographing position is located.
According to one or more embodiments of the present disclosure, in the photographing position determining method provided by the present disclosure, displacement information of the photographing position to the target shape object in the world coordinate system is related to position information of an origin of the world coordinate system in the coordinate system in which the photographing position is located.
According to one or more embodiments of the present disclosure, in a photographing position determining method provided by the present disclosure, determining position information of a target region in a target image includes:
extracting a first image in a target video, and determining first position information of a target area in the first image;
performing optical flow tracking on the target image according to the initial feature points determined by the first position information to obtain target feature points; wherein the target image is an adjacent video frame of the first image in the target video;
fitting the target feature points to obtain the position information of the target region in the target image.
According to one or more embodiments of the present disclosure, in the photographing position determining method provided by the present disclosure, fitting the target feature points to obtain position information of the target region in the target image includes:
and if the coverage area of the target feature points on the edge contour of the target area is larger than or equal to a preset range, fitting the target feature points to obtain the position information of the target area in the target image.
According to one or more embodiments of the present disclosure, in the photographing position determining method provided by the present disclosure, after determining the first position information of the target area in the first image, the method further includes:
determining a variation parameter of the target image relative to the first image;
performing optical flow tracking on the target image according to the initial feature points determined by the first position information to obtain target feature points, including:
and if it is determined, based on the change parameter, that the multiplexing condition is not met, performing the step of optical flow tracking on the target image according to the initial feature points determined by the first position information, so as to obtain the target feature points.
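As an illustration of the change-parameter check above, one simple change parameter is the mean absolute intensity difference between the target image and the first image: when it is small enough, the multiplexing condition is met and the previously determined region position can be reused; otherwise, optical flow tracking is performed anew. The metric and threshold below are assumptions for the sketch, not values specified by the disclosure.

```python
import numpy as np

def multiplexing_condition_met(first_gray, target_gray, threshold=2.0):
    """Change parameter = mean absolute intensity difference (assumed metric).

    Returns True when the target image has changed little enough, relative
    to the first image, that the target-area position determined in the
    first image can be multiplexed (reused) without re-tracking."""
    change = np.mean(np.abs(target_gray.astype(np.float64)
                            - first_gray.astype(np.float64)))
    return change <= threshold
```

In practice the change parameter could instead be derived from device motion sensors or tracked-point displacement; the structure of the check (reuse when below a preset threshold, re-track otherwise) is the same.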
According to one or more embodiments of the present disclosure, the present disclosure provides a photographing position determining apparatus including:
the image information module is used for determining attribute information of a target area in a target image, wherein the attribute information comprises position information and size information, and the target area is an area where a target shape object in the target image is located;
and the shooting position module is used for determining shooting position information of the target image according to the attribute information of the target area and a camera projection model, wherein the shooting position information is position information of a shooting position relative to a world coordinate system.
According to one or more embodiments of the present disclosure, in the photographing position determining apparatus provided by the present disclosure, the photographing position module is configured to:
inputting the attribute information of the target area into a projection equation of the camera projection model, and determining displacement information from the shooting position to the target-shaped object under a world coordinate system;
and carrying out position solving according to the displacement information from the shooting position to the target shape object under the world coordinate system and the position information of the target shape object under the world coordinate system to obtain the shooting position information.
According to one or more embodiments of the present disclosure, in the photographing position determining apparatus provided by the present disclosure, the position information of the target area in the projection equation is related to the size information of the target area, the internal parameters of the photographing apparatus, a rotation matrix from the coordinate system where the photographing position is located to the world coordinate system, and the position information of the origin of the world coordinate system under the coordinate system where the photographing position is located.
According to one or more embodiments of the present disclosure, in the photographing position determining apparatus provided by the present disclosure, the displacement information of the photographing position to the target shape object in the world coordinate system is related to the position information of the origin of the world coordinate system in the coordinate system in which the photographing position is located.
According to one or more embodiments of the present disclosure, in the photographing position determining apparatus provided by the present disclosure, the image information module is configured to:
extracting a first image in a target video, and determining first position information of a target area in the first image;
performing optical flow tracking on the target image according to the initial feature points determined by the first position information to obtain target feature points; wherein the target image is an adjacent video frame of the first image in the target video;
fitting the target feature points to obtain the position information of the target region in the target image.
According to one or more embodiments of the present disclosure, in the photographing position determining apparatus provided by the present disclosure, the image information module is configured to:
and if the coverage area of the target feature points on the edge contour of the target area is larger than or equal to a preset range, fitting the target feature points to obtain the position information of the target area in the target image.
According to one or more embodiments of the present disclosure, in the photographing position determining apparatus provided by the present disclosure, the image information module is configured to:
determining a variation parameter of the target image relative to the first image;
performing optical flow tracking on the target image according to the initial feature points determined by the first position information to obtain target feature points, including:
and if it is determined, based on the change parameter, that the multiplexing condition is not met, performing the step of optical flow tracking on the target image according to the initial feature points determined by the first position information, so as to obtain the target feature points.
According to one or more embodiments of the present disclosure, the present disclosure provides an electronic device comprising:
A processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement any of the shooting position determining methods provided in the present disclosure.
According to one or more embodiments of the present disclosure, the present disclosure provides a computer-readable storage medium storing a computer program for executing any one of the photographing position determining methods provided by the present disclosure.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, solutions formed by mutually substituting the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (8)

1. A shooting position determining method, characterized by comprising:
determining attribute information of a target area in a target image, wherein the attribute information comprises position information and size information, and the target area is an area where a target shape object in the target image is located;
determining shooting position information of the target image according to the attribute information of the target area and a camera projection model; wherein, the shooting position information is the position information of the shooting position relative to a world coordinate system;
wherein determining the shooting position information of the target image according to the attribute information of the target area and the camera projection model comprises the following steps:
inputting the attribute information of the target area into a projection equation of the camera projection model, and determining displacement information from the shooting position to the target-shaped object under a world coordinate system; the position information of the target area in the projection equation is related to the size information of the target area, the internal parameters of the shooting device, the rotation matrix from the coordinate system of the shooting position to the world coordinate system, and the position information of the origin of the world coordinate system under the coordinate system of the shooting position;
And carrying out position solving according to the displacement information from the shooting position to the target shape object under the world coordinate system and the position information of the target shape object under the world coordinate system to obtain the shooting position information.
2. The method according to claim 1, wherein the displacement information from the shooting position to the target-shaped object in the world coordinate system is related to the position information of the origin of the world coordinate system in the coordinate system of the shooting position.
3. The method of claim 1, wherein determining the position information of the target area in the target image comprises:
extracting a first image in a target video, and determining first position information of a target area in the first image;
performing optical flow tracking on the target image according to the initial feature points determined by the first position information to obtain target feature points; wherein the target image is an adjacent video frame of the first image in the target video;
fitting the target feature points to obtain the position information of the target area in the target image.
4. The method according to claim 3, wherein fitting the target feature points to obtain the position information of the target area in the target image comprises:
if the coverage range of the target feature points on the edge contour of the target area is greater than or equal to a preset range, fitting the target feature points to obtain the position information of the target area in the target image.
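The tracking-and-fitting steps of claims 3 and 4 can be sketched as follows, assuming the target-shaped object is circular. Real optical flow (e.g. pyramidal Lucas-Kanade) is replaced here by a stand-in translation so the example stays self-contained, and the circle fit is the standard algebraic (Kåsa) least-squares fit, which is one plausible choice of fitting, not necessarily the one used in the patent.

```python
import numpy as np

# Feature points on the target area's edge contour in the first frame:
# a circle of radius 40 centred at (100, 80). Placeholder geometry.
theta = np.linspace(0.0, 2.0 * np.pi, 24, endpoint=False)
pts0 = np.stack([100 + 40 * np.cos(theta),
                 80 + 40 * np.sin(theta)], axis=1)

# Stand-in for optical flow tracking: in the adjacent video frame the
# tracked target feature points are the same contour shifted by the
# inter-frame motion.
pts1 = pts0 + np.array([5.0, -3.0])

# Coverage check (claim 4): here all 24 initial points survived
# tracking, so the edge contour is fully covered and fitting proceeds.

# Algebraic (Kasa) circle fit: x^2 + y^2 = 2*a*x + 2*b*y + c,
# solved by linear least squares; radius r = sqrt(c + a^2 + b^2).
A = np.column_stack([2 * pts1[:, 0], 2 * pts1[:, 1], np.ones(len(pts1))])
rhs = (pts1 ** 2).sum(axis=1)
a_c, b_c, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
r = np.sqrt(c + a_c ** 2 + b_c ** 2)
print(a_c, b_c, r)  # fitted centre and radius in the target image
```

The fit recovers the shifted centre (105, 77) and radius 40 exactly, since the stand-in points are noise-free; tracked points from real optical flow would make the least-squares step genuinely necessary.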
5. The method according to claim 3, further comprising, after determining the first position information of the target area in the first image:
determining a variation parameter of the target image relative to the first image;
performing optical flow tracking on the target image according to the initial feature points determined by the first position information to obtain target feature points, including:
if it is determined, based on the variation parameter, that the multiplexing condition is not met, performing the step of performing optical flow tracking on the target image according to the initial feature points determined by the first position information to obtain the target feature points; wherein the multiplexing condition is a judgment condition for determining whether the position of the target area in the first image can be multiplexed for the target image.
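The two gating conditions above (the coverage check of claim 4 and the multiplexing condition of claim 5) amount to threshold tests. The sketch below uses a hypothetical bin-based coverage measure and hypothetical thresholds; neither the measure nor the threshold values are specified in the claims.

```python
import numpy as np

def coverage_ok(tracked_angles_deg, min_coverage=0.6):
    """Claim 4 condition: accept the tracked points only if they cover
    enough of the edge contour. Coverage is measured here as the
    fraction of 36 ten-degree angular bins containing at least one
    tracked point (an illustrative measure, not the patent's)."""
    bins = np.unique(np.asarray(tracked_angles_deg, dtype=int) // 10)
    return len(bins) / 36.0 >= min_coverage

def can_multiplex(variation_param, threshold=0.02):
    """Claim 5 condition: if the target image changed little relative
    to the first image, the first image's target-area position can be
    multiplexed (reused) instead of re-running optical flow."""
    return variation_param < threshold
```

A caller would re-run the optical flow tracking step only when `can_multiplex` returns False, and fall back to re-detecting the target area when `coverage_ok` fails.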
6. A photographing position determining apparatus, comprising:
an image information module, configured to determine attribute information of a target area in a target image, wherein the attribute information comprises position information and size information, and the target area is an area where a target-shaped object in the target image is located;
a shooting position module, configured to determine shooting position information of the target image according to the attribute information of the target area and a camera projection model, wherein the shooting position information is position information of the shooting position relative to a world coordinate system;
wherein the shooting position module is configured to:
input the attribute information of the target area into a projection equation of the camera projection model, and determine displacement information from the shooting position to the target-shaped object in the world coordinate system; wherein the position information of the target area in the projection equation is related to the size information of the target area, the internal parameters of the shooting device, the rotation matrix from the coordinate system of the shooting position to the world coordinate system, and the position information of the origin of the world coordinate system in the coordinate system of the shooting position;
and solve for the position according to the displacement information from the shooting position to the target-shaped object in the world coordinate system and the position information of the target-shaped object in the world coordinate system, to obtain the shooting position information.
7. An electronic device, the electronic device comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the shooting position determining method of any one of claims 1 to 5.
8. A computer-readable storage medium, characterized in that the storage medium stores a computer program for executing the shooting position determining method according to any one of claims 1 to 5.
CN202110277835.1A 2021-03-15 2021-03-15 Shooting position determining method, device, equipment and medium Active CN115086541B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202110277835.1A CN115086541B (en) 2021-03-15 2021-03-15 Shooting position determining method, device, equipment and medium
PCT/CN2022/080916 WO2022194145A1 (en) 2021-03-15 2022-03-15 Photographing position determination method and apparatus, device, and medium
US18/468,647 US20240005552A1 (en) 2021-03-15 2023-09-15 Target tracking method and apparatus, device, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110277835.1A CN115086541B (en) 2021-03-15 2021-03-15 Shooting position determining method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN115086541A CN115086541A (en) 2022-09-20
CN115086541B true CN115086541B (en) 2023-12-22

Family

ID=83240562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110277835.1A Active CN115086541B (en) 2021-03-15 2021-03-15 Shooting position determining method, device, equipment and medium

Country Status (2)

Country Link
CN (1) CN115086541B (en)
WO (1) WO2022194145A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115526672B (en) * 2022-11-23 2023-04-07 深圳市亲邻科技有限公司 Advertisement delivery photo auditing method, device, medium and equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08201021A (en) * 1995-01-23 1996-08-09 Mazda Motor Corp Calibration method
JP2007256091A (en) * 2006-03-23 2007-10-04 Space Vision:Kk Method and apparatus for calibrating range finder
CN104240289A (en) * 2014-07-16 2014-12-24 崔岩 Three-dimensional digitalization reconstruction method and system based on single camera
CN108519088A (en) * 2018-03-05 2018-09-11 华南理工大学 A kind of photopic vision localization method based on artificial neural network
CN109190612A (en) * 2018-11-12 2019-01-11 朱炳强 Image acquisition and processing equipment and image acquisition and processing method
CN109579701A (en) * 2018-12-17 2019-04-05 吉林大学 Elliptical center projection distortion removing method based on structure light vision measuring systems
CN109685855A (en) * 2018-12-05 2019-04-26 长安大学 A kind of camera calibration optimization method under road cloud monitor supervision platform
CN110335292A (en) * 2019-07-09 2019-10-15 北京猫眼视觉科技有限公司 It is a kind of to track the method and system for realizing simulated scenario tracking based on picture
CN110807807A (en) * 2018-08-01 2020-02-18 深圳市优必选科技有限公司 Monocular vision target positioning pattern, method, device and equipment
CN111311681A (en) * 2020-02-14 2020-06-19 北京云迹科技有限公司 Visual positioning method, device, robot and computer readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6022834B2 (en) * 2012-07-13 2016-11-09 株式会社日本自動車部品総合研究所 Position detection apparatus and position detection program

Also Published As

Publication number Publication date
WO2022194145A1 (en) 2022-09-22
CN115086541A (en) 2022-09-20

Similar Documents

Publication Publication Date Title
CN110728622B (en) Fisheye image processing method, device, electronic equipment and computer readable medium
CN111222509B (en) Target detection method and device and electronic equipment
CN115086541B (en) Shooting position determining method, device, equipment and medium
CN111862351B (en) Positioning model optimization method, positioning method and positioning equipment
CN112907628A (en) Video target tracking method and device, storage medium and electronic equipment
CN115086538B (en) Shooting position determining method, device, equipment and medium
CN110555861B (en) Optical flow calculation method and device and electronic equipment
CN113642493B (en) Gesture recognition method, device, equipment and medium
CN115082516A (en) Target tracking method, device, equipment and medium
CN114419298A (en) Virtual object generation method, device, equipment and storage medium
CN112037280A (en) Object distance measuring method and device
CN112070903A (en) Virtual object display method and device, electronic equipment and computer storage medium
CN112418233A (en) Image processing method, image processing device, readable medium and electronic equipment
CN112668474B (en) Plane generation method and device, storage medium and electronic equipment
CN113808050B (en) Denoising method, device and equipment for 3D point cloud and storage medium
CN111368015B (en) Method and device for compressing map
CN111833459B (en) Image processing method and device, electronic equipment and storage medium
CN115937010B (en) Image processing method, device, equipment and medium
CN111860209B (en) Hand recognition method, device, electronic equipment and storage medium
CN111583283B (en) Image segmentation method, device, electronic equipment and medium
CN115082515A (en) Target tracking method, device, equipment and medium
CN112906551A (en) Video processing method and device, storage medium and electronic equipment
CN116259041A (en) Signal lamp identification and grouping method, device, electronic equipment and storage medium
CN117906634A (en) Equipment detection method, device, equipment and medium
CN116797957A (en) Method and device for identifying key points, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant