WO2022135242A1 - Virtual positioning method and apparatus, and virtual positioning system - Google Patents


Info

Publication number
WO2022135242A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
camera
positioning
positioning data
virtual
Prior art date
Application number
PCT/CN2021/138525
Other languages
French (fr)
Chinese (zh)
Inventor
聂兰龙
Original Assignee
青岛千眼飞凤信息技术有限公司
Priority date
Filing date
Publication date
Application filed by 青岛千眼飞凤信息技术有限公司
Publication of WO2022135242A1 publication Critical patent/WO2022135242A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06T5/80

Definitions

  • the present application relates to the field of computer technology, and in particular, to a virtual positioning method and device, and a virtual positioning system.
  • Embodiments of the present application provide a virtual positioning method and device, and a virtual positioning system, so as to at least solve the technical problem in the related art that image sensors are applied in relatively limited ways and cannot be used for large-scale collaborative tracking.
  • a virtual positioning method, including: acquiring target positioning data of a target in a first camera cluster, where the first camera cluster consists of cameras used in a positioning system, and the positioning system is used to perform a target tracking task to generate positioning data; sending the target positioning data to the positioning system, wherein the positioning system uses a target mapping model and the target positioning data to generate virtual positioning data of the target in a second camera cluster, and the target mapping model is used to describe the spatial mapping relationship between the first camera cluster and the second camera cluster; and performing virtual positioning of the target according to the virtual positioning data.
  • the target mapping model is a mapping model obtained by the positioning system from a mapping model system, wherein the mapping model system generates, through initialization modeling, a mapping model that includes the relationship between the positions and viewing angles of the cameras in the predetermined application scene.
  • before performing virtual positioning on the target according to the virtual positioning data, the virtual positioning method further includes verifying the virtual positioning data, which includes: acquiring the actual positioning data of the target in the second camera cluster; and determining the similarity between the actual positioning data and the virtual positioning data by using a predetermined verification rule, so as to verify the consistency between the virtual positioning data and the actual positioning data.
  • the virtual positioning method further includes: in response to a relay tracking request signal, determining the best sampling camera in the second camera cluster according to the virtual positioning data; setting the attribute of the best sampling camera to the first camera cluster to obtain an updated first camera cluster; acquiring the image information stream collected by the updated first camera cluster; and sending the image information stream to the positioning system, wherein the positioning system generates target positioning data of the target in the updated first camera cluster based on the image information stream.
  • responding to the relay tracking request signal includes: determining the distribution density of cameras in the predetermined application scenario where the first camera cluster and the second camera cluster are located; when the distribution density is less than a predetermined value, responding to the relay tracking request signal when the target is located at a predetermined position at the edge of the viewing area of the first camera cluster; or, when the distribution density is not less than the predetermined value, responding to the relay tracking request signal when the target leaves the middle position of the viewing area of the first camera cluster.
  • responding to the relay tracking request signal includes: determining that the positioning accuracy of the target is lower than a predetermined threshold; and responding to the relay tracking request signal.
  • the virtual positioning method further includes: in response to an identification matching task, determining at least one camera in the second camera cluster that contains the target; acquiring frame image information of the at least one camera; and identifying a feature identifier in the frame image information and generating target identification information of the target based on the feature identifier.
  • performing virtual positioning on the target according to the virtual positioning data includes: responding to a target query instruction of a terminal device and acquiring target identification information of the target query instruction; determining a viewfinder camera based on the target identification information combined with the target positioning data and/or the virtual positioning data; and feeding back the video stream of the target collected by the viewfinder camera to the terminal device.
  • before performing virtual positioning on the target according to the virtual positioning data, the virtual positioning method further includes: in response to a tracking task request, determining a positioning calibration camera in the second camera cluster according to the virtual positioning data; acquiring frame image buffer information of the positioning calibration camera; generating calibration positioning data of the target in the positioning calibration camera based on the frame image buffer information; and correcting the target positioning data according to the calibration positioning data.
  • the start condition of the tracking task request includes one of the following: the target is lost, the positioning accuracy of the target is lower than a predetermined threshold, or the predetermined plan of the target tracking task.
  • the virtual positioning method further includes: determining a viewfinder camera according to a predetermined rule based on the target positioning data and/or the virtual positioning data; and sending frame image information collected by the viewfinder camera to a predetermined storage medium.
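The "predetermined rule" for choosing a viewfinder camera is left open by the application. A minimal sketch, assuming the rule is simply "pick the camera whose viewing-area center is nearest to the target's positioning data" (the camera layout, coordinates, and the rule itself are illustrative assumptions, not taken from the application):

```python
import math

def select_viewfinder_camera(target_xy, cameras):
    """cameras: dict mapping camera id -> (x, y) viewing-area center.

    Hypothetical nearest-camera rule; the application only requires
    that the viewfinder camera be chosen by a predetermined rule.
    """
    def dist(cam_id):
        cx, cy = cameras[cam_id]
        return math.hypot(target_xy[0] - cx, target_xy[1] - cy)
    return min(cameras, key=dist)

# Toy layout: three fixed cameras and one target position.
cams = {"cam_a": (0.0, 0.0), "cam_b": (10.0, 0.0), "cam_c": (5.0, 8.0)}
print(select_viewfinder_camera((9.0, 1.0), cams))  # cam_b is nearest
```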
  • a virtual positioning device, including: an acquiring unit configured to acquire target positioning data of a target in a first camera cluster, wherein the first camera cluster consists of cameras used in a positioning system, and the positioning system is used to perform a target tracking task to generate positioning data; a sending unit configured to send the target positioning data to the positioning system, wherein the positioning system uses a target mapping model and the target positioning data to generate virtual positioning data of the target in a second camera cluster, and the target mapping model is used to describe the spatial mapping relationship between the first camera cluster and the second camera cluster; and a virtual positioning unit configured to perform virtual positioning on the target according to the virtual positioning data.
  • the target mapping model is a mapping model obtained by the positioning system from a mapping model system, wherein the mapping model system generates, through initialization modeling, a mapping model that includes the relationship between the positions and viewing angles of the cameras in the predetermined application scene.
  • the virtual positioning device further includes a verification unit configured to verify the virtual positioning data before virtual positioning is performed on the target according to the virtual positioning data; the verification unit includes: a first acquisition module configured to acquire the actual positioning data of the target in the second camera cluster; and a verification module configured to determine, by using a predetermined verification rule, the similarity between the actual positioning data and the virtual positioning data, so as to verify the consistency between the virtual positioning data and the actual positioning data.
  • the virtual positioning device further includes: a first determining unit configured to respond to a relay tracking request signal and determine the best sampling camera in the second camera cluster according to the virtual positioning data; a setting unit configured to set the attribute of the best sampling camera to the first camera cluster to obtain an updated first camera cluster; the acquiring unit is configured to acquire the image information stream collected by the updated first camera cluster; and the sending unit is configured to send the image information stream to the positioning system, wherein the positioning system generates target positioning data of the target in the updated first camera cluster based on the image information stream.
  • the first determining unit includes: a first determination module configured to determine the distribution density of cameras in the predetermined application scenario where the first camera cluster and the second camera cluster are located; a second determination module configured to respond to the relay tracking request signal when the distribution density is less than a predetermined value and the target is located at a predetermined position at the edge of the viewing area of the first camera cluster; and a third determination module configured to respond to the relay tracking request signal when the distribution density is not less than the predetermined value and the target leaves the middle position of the viewing area of the first camera cluster.
  • the first determining unit includes: a fourth determination module configured to determine that the positioning accuracy of the target is lower than a predetermined threshold; and a response module configured to respond to the relay tracking request signal.
  • the virtual positioning device further includes: a second determining unit configured to, after virtual positioning is performed on the target according to the virtual positioning data, determine, in response to an identification matching task, at least one camera in the second camera cluster that contains the target; the acquiring unit is configured to acquire the frame image information of the at least one camera; and a first generating unit configured to identify a feature identifier in the frame image information and generate target identification information of the target based on the feature identifier.
  • the virtual positioning unit includes: a second acquisition module configured to respond to a target query instruction of a terminal device and acquire target identification information of the target query instruction; a fifth determination module configured to determine a viewfinder camera based on the target identification information combined with the target positioning data and/or the virtual positioning data; and a feedback module configured to feed back the video stream of the target collected by the viewfinder camera to the terminal device.
  • the virtual positioning device further includes: a third determining unit configured to, before virtual positioning is performed on the target according to the virtual positioning data, determine a positioning calibration camera in the second camera cluster according to the virtual positioning data in response to a tracking task request; the acquiring unit is configured to acquire the frame image buffer information of the positioning calibration camera; a second generating unit configured to generate calibration positioning data of the target in the positioning calibration camera based on the frame image buffer information; and a correction unit configured to correct the target positioning data according to the calibration positioning data.
  • the start condition of the tracking task request includes one of the following: the target is lost, the positioning accuracy of the target is lower than a predetermined threshold, or the predetermined plan of the target tracking task.
  • the virtual positioning device further includes: a fourth determining unit configured to determine a viewfinder camera according to a predetermined rule based on the target positioning data and/or the virtual positioning data; the sending unit is configured to send frame image information collected by the viewfinder camera to a predetermined storage medium.
  • a virtual positioning system is also provided, using any one of the virtual positioning methods described above.
  • a computer-readable storage medium includes a stored computer program, wherein when the computer program is run by a processor, the device where the storage medium is located is controlled to execute the virtual positioning method described in any one of the above.
  • a processor is also provided, where the processor is configured to run a computer program, wherein when the computer program runs, the virtual positioning method described in any one of the above is executed.
  • in the embodiments of the present application, the target positioning data of the target in the first camera cluster is acquired, wherein the first camera cluster consists of cameras used in the positioning system, and the positioning system is used to perform the target tracking task to generate positioning data; the target positioning data is sent to the positioning system, wherein the positioning system uses the target mapping model and the target positioning data to generate virtual positioning data of the target in the second camera cluster, and the target mapping model is used to describe the spatial mapping relationship between the first camera cluster and the second camera cluster; and virtual positioning of the target is performed according to the virtual positioning data. The virtual positioning method provided by the embodiments of the present application achieves the purpose of positioning the target through the coordination of camera clusters, achieves the technical effect of improving the accuracy of target positioning, and thereby solves the technical problem that image sensors in the related art are applied in relatively limited ways and cannot be used for large-scale collaborative tracking.
  • FIG. 1 is a flowchart of a virtual positioning method according to an embodiment of the present application.
  • FIG. 2 is a sequence diagram 1 of a virtual positioning method according to an embodiment of the present application.
  • FIG. 3 is a sequence diagram 2 of a virtual positioning method according to an embodiment of the present application.
  • FIG. 4 is a sequence diagram 3 of a virtual positioning method according to an embodiment of the present application.
  • FIG. 5 is a sequence diagram 4 of a virtual positioning method according to an embodiment of the present application.
  • FIG. 6 is a sequence diagram 5 of a virtual positioning method according to an embodiment of the present application.
  • FIG. 7 is a sequence diagram 6 of a virtual positioning method according to an embodiment of the present application.
  • FIG. 8 is a sequence diagram 7 of a virtual positioning method according to an embodiment of the present application.
  • FIG. 9 is a sequence diagram 8 of a virtual positioning method according to an embodiment of the present application.
  • FIG. 10(a) is a schematic diagram 1 of a scenic spot according to an embodiment of the present application.
  • FIG. 10(b) is a schematic diagram 2 of a scenic spot according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a logistics transfer station according to an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a traffic monitoring area according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a kindergarten according to an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a subway operating company according to an embodiment of the present application.
  • FIG. 15 is a schematic diagram of a virtual positioning apparatus according to an embodiment of the present application.
  • a method embodiment of a virtual positioning method is provided. It should be noted that the steps shown in the flowchart of the accompanying drawings may be executed in a computer system, such as a set of computer-executable instructions, and, although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from the order herein.
  • FIG. 1 is a flowchart of the virtual positioning method according to an embodiment of the present application. As shown in FIG. 1, the virtual positioning method includes the following steps:
  • Step S102: acquire target positioning data of the target in the first camera cluster, where the first camera cluster consists of cameras used in the positioning system, and the positioning system is used to perform a target tracking task to generate positioning data.
  • the first camera cluster here serves as the sampling cameras of a tracking and positioning system (i.e., the positioning system), and the tracking and positioning system is configured to perform a target tracking task to generate positioning data.
  • Step S104: send the target positioning data to the positioning system, wherein the positioning system uses the target mapping model and the target positioning data to generate virtual positioning data of the target in the second camera cluster, and the target mapping model is used to describe the spatial mapping relationship between the first camera cluster and the second camera cluster.
  • the cameras in the second camera cluster and the cameras in the first camera cluster are fixed cameras; the position and viewing angle of each camera are fixed, so frame image information with a still background can be obtained. Because the viewing angle is fixed, zooming the camera does not change the perspective relationship of image acquisition but only changes the viewing area, so the position coordinates of the target in various zoom states can be calculated.
  • lens distortion occurs in the zooming process of some cameras; in particular, wide-angle and fisheye cameras exhibit obvious distortion, and the effect of distortion on target positioning can be eliminated through distortion correction techniques. Therefore, on the premise that the camera zoom parameters, the lens distortion parameters, and the focal-length state at modeling initialization are known, the target positioning data and the virtual positioning data are not affected by camera zoom or lens distortion.
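As a rough illustration of the distortion-correction premise, a common radial-distortion model, x_d = x_u(1 + k1 r^2 + k2 r^4), can be inverted by fixed-point iteration. The model form and the coefficient values below are generic assumptions, not parameters from the application:

```python
def undistort_point(xd, yd, k1, k2, iters=20):
    """Invert x_d = x_u * (1 + k1*r^2 + k2*r^4) by fixed-point iteration.

    (xd, yd) are distorted coordinates in normalized image space; k1, k2
    are assumed radial-distortion coefficients. Converges for moderate
    distortion levels typical of calibrated lenses.
    """
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / scale, yd / scale
    return xu, yu

# Distort a known point, then recover it (approximately (0.3, 0.2)).
r2 = 0.3 ** 2 + 0.2 ** 2
scale = 1.0 - 0.2 * r2 + 0.05 * r2 * r2
print(undistort_point(0.3 * scale, 0.2 * scale, k1=-0.2, k2=0.05))
```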
  • the second camera cluster may also include a chain image acquisition device, in which a plurality of cameras are distributed in a chain on the same data transmission bus.
  • the tracking and positioning system may include one or more sets of target tracking algorithms, such as the frame difference method, background compensation method, expectation maximization method, optical flow method, statistical model method, level set method, Parallel Tracking and Mapping (PTAM), and the Automatic Camera Tracking System (ACTS); it may also include deep neural networks, achieving target detection, tracking, and positioning through machine learning and other means.
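Of the listed algorithms, the frame difference method is the simplest to sketch for a fixed camera with a still background. The threshold value and the centroid-of-changed-pixels step below are illustrative choices, not specified by the application:

```python
import numpy as np

def detect_moving_target(prev_frame, cur_frame, threshold=30):
    """Return the centroid (row, col) of changed pixels, or None.

    Assumes grayscale frames from a fixed camera, so any large
    per-pixel change against the still background marks the target.
    """
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())

bg = np.zeros((8, 8), dtype=np.uint8)   # still background
cur = bg.copy()
cur[2:4, 5:7] = 200                     # a small bright "target" appears
print(detect_moving_target(bg, cur))    # centroid (2.5, 5.5)
```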
  • Step S106: perform virtual positioning on the target according to the virtual positioning data.
  • through the above steps, the target positioning data of the target in the first camera cluster can be acquired, wherein the first camera cluster consists of the cameras used in the positioning system, and the positioning system is used to perform the target tracking task to generate positioning data; the target positioning data is sent to the positioning system, wherein the positioning system uses the target mapping model and the target positioning data to generate virtual positioning data of the target in the second camera cluster, and the target mapping model is used to describe the spatial mapping relationship between the first camera cluster and the second camera cluster; and virtual positioning of the target is performed according to the virtual positioning data. In this way, the goal of positioning the target through the coordination of camera clusters is achieved, and the technical effect of improving the accuracy of target positioning is achieved.
  • the virtual positioning method provided by the embodiments of the present application solves the technical problem that the application of the image sensor in the related art is relatively simple, and the image sensor cannot be used for large-scale collaborative tracking.
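One possible concrete realization of steps S102 to S106 (an assumption; the application does not prescribe the mathematical form of the mapping model) is a planar homography between the ground planes of two fixed cameras, so that target positioning data in the first cluster maps to virtual positioning data in the second cluster by a projective transform:

```python
import numpy as np

def virtual_position(H, xy):
    """Map a target position through a 3x3 homography H (assumed form
    of the 'spatial mapping relationship' between two camera clusters)."""
    x, y = xy
    p = H @ np.array([x, y, 1.0])
    return float(p[0] / p[2]), float(p[1] / p[2])

# Toy mapping model: identity plus a translation between viewing planes.
H = np.array([[1.0, 0.0, 4.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
print(virtual_position(H, (3.0, 5.0)))  # (7.0, 3.0)
```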
  • the target mapping model is a mapping model obtained by the positioning system from the mapping model system, wherein the mapping model system generates, through initialization modeling, a mapping model that includes the relationship between the positions and viewing angles of the cameras in the predetermined application scene.
  • the camera cluster mapping model is a space model generated by the mapping model system through initialization modeling; it includes the relationship between the positions and viewing angles of the cameras in the application scene, and describes the mutual mapping relationship of the first camera cluster and the second camera cluster in space.
  • the mapping model may be one of a two-dimensional model, a three-dimensional model, and a dynamic model. The two-dimensional model is generated mainly by analyzing the overlapping areas in the initialization-modeling frame image information collected by each camera; in practical applications, the two-dimensional model suffers from low positioning accuracy and is easily affected by the three-dimensional environment.
  • the three-dimensional model can be established by analyzing three or more common position points in the initialization-modeling frame image information collected by the cameras during the modeling initialization process, so as to build the three-dimensional function mapping relationship of each camera in the space model; the three-dimensional model can also be built by analyzing the position coordinates of one target at different times during modeling initialization. Since the camera clusters in the embodiments of the present application all have fixed positions and fixed viewing angles, the background of each camera's viewing area is the same at different times; when the same target moves to three or more positions in the viewing area of each camera, the three-dimensional function mapping relationship of each camera in the space model can be constructed by analyzing the target's position coordinates. The dynamic model consists of different space models corresponding to different conditions, and positions that repeat in a standardized way are a prerequisite for building a dynamic model; for example, subway trains accurately stop at the same location when they arrive at a station.
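The "three or more common position points" initialization step can be sketched as a least-squares fit between matched point sets. The 2D affine form below is an illustrative simplification of the three-dimensional function mapping described above, chosen only because it is the smallest model that three point pairs determine:

```python
import numpy as np

def fit_affine(src, dst):
    """Fit a 2x3 affine map from src to dst point sets (N >= 3).

    src, dst: (N, 2) arrays of matched points seen by two fixed cameras
    during initialization modeling. Solves A @ X ~ dst by least squares.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])   # (N, 3) homogeneous
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2)
    return X.T                                      # 2x3 affine matrix

# Four common points related by scale (2, 3) and shift (2, 3).
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(2, 3), (4, 3), (2, 6), (4, 6)]
M = fit_affine(src, dst)
print(M @ np.array([0.5, 0.5, 1.0]))  # maps to approximately [3.0, 4.5]
```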
  • for the spatial mapping relationship between the cameras inside and outside the doors of a subway train, the prerequisite is that the train accurately stops at the platform, so that the positions repeat in a standardized way.
  • similarly, different three-dimensional models can be triggered and determined according to different elevator stopping conditions, so that the two-dimensional or three-dimensional mapping relationships of different camera clusters can be obtained.
  • FIG. 2 is a sequence diagram 1 of a virtual positioning method according to an embodiment of the present application.
  • the first camera cluster sends the collected frame image information stream to the service system, and the service system sends the frame image information stream of the first camera cluster to the tracking and positioning system.
  • the tracking and positioning system generates positioning data based on the frame image information stream and returns the positioning data to the service system.
  • the service system will send the positioning data of the target in the first camera cluster to the virtual positioning system.
  • the positioning system sends a mapping model request to the mapping model system and receives the target mapping model fed back by the mapping model system in response to the request; it then generates virtual positioning data based on the mapping model and feeds the virtual positioning data back to the service system.
  • in other words, by acquiring the positioning data of the target in the first camera cluster, the camera cluster mapping model can be invoked, and virtual positioning data corresponding to the target in the second camera cluster can be generated based on the mapping model and the target positioning data.
  • the virtual positioning method may further include verifying the virtual positioning data, which includes: acquiring the actual positioning data of the target in the second camera cluster; and determining the similarity between the actual positioning data and the virtual positioning data by using a predetermined verification rule, so as to verify the consistency between the virtual positioning data and the actual positioning data.
  • the positioning data in the second camera cluster is target positioning data generated by a second tracking and positioning system, which is configured to sample the second camera cluster and perform target tracking to generate the target positioning data.
  • the second tracking and positioning system and the second camera cluster can be attached to a monitoring service system outside the service system, and the two service systems only perform limited data exchange during initial modeling and target verification.
  • the external camera cluster does not really exist in the local service system; it is only a virtual mapped camera in the mapping model. By verifying the consistency between the virtual positioning data and the real positioning data, the authenticity and validity of target tracking can be verified, and strict process control of the target tracking task can be realized.
  • in addition to the flow shown in FIG. 2, the virtual positioning method also includes a process of verifying the virtual positioning data. Specifically, as shown in FIG. 3, the second camera cluster sends the collected frame image information stream to the second tracking and positioning system; the second tracking and positioning system generates actual positioning data (i.e., target positioning data) based on the received frame image information stream and sends the actual positioning data to the service system; and the service system can verify the virtual positioning data based on the actual positioning data.
  • the second tracking and positioning system receives the positioning data of the target in the second camera cluster, verifies the target's virtual positioning data against the positioning data of the second camera cluster according to a predetermined rule, and generates a verification result for the positioning data of the target in the second camera cluster.
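A minimal sketch of such a predetermined verification rule, assuming consistency is judged by the Euclidean distance between virtual and actual positioning data staying under a threshold (the metric and the threshold value are illustrative assumptions; the application only requires some predetermined rule):

```python
import math

def verify_positioning(virtual_xy, actual_xy, max_dist=0.5):
    """Treat virtual and actual positioning data as consistent when
    their Euclidean distance is within max_dist (assumed rule)."""
    d = math.hypot(virtual_xy[0] - actual_xy[0],
                   virtual_xy[1] - actual_xy[1])
    return d <= max_dist

print(verify_positioning((7.0, 3.0), (7.2, 3.1)))  # True  (consistent)
print(verify_positioning((7.0, 3.0), (9.0, 3.0)))  # False (inconsistent)
```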
  • the virtual positioning method further includes: in response to a relay tracking request signal, determining the best sampling camera in the second camera cluster according to the virtual positioning data; setting the attribute of the best sampling camera to the first camera cluster to obtain an updated first camera cluster; acquiring the image information stream collected by the updated first camera cluster; and sending the image information stream to the positioning system, wherein the positioning system generates the target's positioning data in the updated first camera cluster based on the image information stream.
• The best sampling camera in the second camera cluster can be determined from the virtual positioning data of the second camera cluster in response to the relay tracking request of the target tracking task, and the attribute of that camera converted to the first camera cluster. After the conversion, the best sampling camera replaces the original on-site sampling camera of the tracking task and relays the frame image sampling of the target tracking task.
• The starting condition of the relay tracking request of the target tracking task is that the target is located in a predetermined viewing area of the first camera cluster, and/or that the target determination accuracy is lower than a threshold.
• Responding to the relay tracking request signal includes: determining the distribution density of cameras in the predetermined application scene where the first camera cluster and the second camera cluster are located; when the distribution density is less than a predetermined value, responding to the relay tracking request signal when the target reaches a predetermined position at the edge of the viewing area of the first camera cluster; or, when the distribution density is not less than the predetermined value, responding to the relay tracking request signal when the target leaves the middle position of the viewing area of the first camera cluster.
• The relay tracking request of the target tracking task can set its start criterion according to the density of the on-site camera arrangement. For example, in a scene with low camera density, the viewing areas of the cameras overlap little, and a relay tracking request can be initiated when the tracked target enters a predetermined range at the edge of the viewing area of the first camera cluster's sampling camera. In a scene with high camera density, the viewing areas overlap substantially, and the relay tracking request can be set to start as soon as the tracked target leaves a predetermined range in the middle of the sampling camera's viewing area.
• In a chain image acquisition device the camera density is very high, so a relay tracking request condition with a strict start criterion can be set to realize mirror-style target tracking records.
• Responding to the relay tracking request signal also includes: determining that the positioning accuracy of the target is lower than a predetermined threshold, and responding to the relay tracking request signal.
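• The start conditions above (camera density, edge/middle position, and positioning accuracy) can be combined into one decision routine. This is a hedged sketch: the density threshold, the normalized 1-D target position, and the margin values are illustrative assumptions, not values fixed by the method.

```python
def should_request_relay(density, target_pos, view_min, view_max,
                         density_threshold=1.0, edge_margin=0.1,
                         middle_margin=0.3, accuracy=None,
                         accuracy_threshold=0.8):
    """Decide whether to start a relay tracking request.

    density    -- cameras per unit area in the application scene
    target_pos -- 1-D target position inside [view_min, view_max]
    accuracy   -- optional current positioning accuracy of the target
    """
    # Low positioning accuracy always triggers a relay request.
    if accuracy is not None and accuracy < accuracy_threshold:
        return True
    rel = (target_pos - view_min) / (view_max - view_min)  # 0..1 across the view
    if density < density_threshold:
        # Sparse cameras: relay only when the target reaches the edge region.
        return rel <= edge_margin or rel >= 1 - edge_margin
    # Dense cameras: relay as soon as the target leaves the middle region.
    return abs(rel - 0.5) > middle_margin
```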
• FIG. 4 is a third sequence diagram of a virtual positioning method according to an embodiment of the present application. As shown in FIG. 4, in addition to the steps of FIG. 2, the method includes the following: the service system starts a relay tracking request for the target tracking task, determines the best sampling camera in the second camera cluster, and notifies the second camera cluster of the designated camera; the second camera cluster converts the attribute of the best sampling camera to the first camera cluster; the service system receives the frame image information stream sent by the updated first camera cluster and forwards it to the tracking and positioning system, which generates target positioning data from the updated stream and returns it to the service system.
• In this way, the continuity of the target tracking task is ensured and the target positioning accuracy is improved.
• The virtual positioning method may further include: in response to an identification matching task, determining at least one camera in the second camera cluster whose view contains the target; acquiring frame image information of the at least one camera; and identifying feature identifiers in the frame image information and generating target identifier information for the target based on the feature identifiers.
• The feature identifiers in the frame image information can be image features, for example facial features of people, contour features of objects, color features, barcodes or two-dimensional codes; a combination of multiple identifiers can be matched and associated with the same tracked target.
• For example, a color identifier of a car can be associated with its color feature, a type identifier of the car with its contour feature, and a license plate number identifier of the car with license plate recognition.
• Non-image features, such as RFID radio frequency features, sound features, visible light stroboscopic features or motion features, can also be matched and associated.
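• The association of several feature identifiers with one tracked target can be illustrated with a small sketch. The record structure and the function `match_identifiers` are assumptions chosen for the example; the identifier values echo the vehicle example used later in this document.

```python
def match_identifiers(target_id, features):
    """Associate each recognized (kind, value) feature identifier with one tracked target."""
    record = {"target": target_id, "identifiers": {}}
    for kind, value in features:
        record["identifiers"][kind] = value
    return record

# One tracked vehicle carrying three identifiers at once: color, type, plate.
car = match_identifiers("NO.002", [("color", "red"),
                                   ("type", "car"),
                                   ("plate", "88888")])
```

Any one of the stored identifiers can then serve as a retrieval key for the target's monitoring records.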
• The service system can set different starting conditions for the identification matching task: for example, a fixed start interval; in response to the target determination accuracy of the target tracking task falling below a threshold; in response to a target in the tracking task being lost; or in response to an identity verification command from a user terminal.
• The service system determines, from the virtual positioning data of the second camera cluster, which cameras in the second camera cluster contain the target, and requests the frame image buffer information of those cameras.
• The identification verification task analyzes the frame image information of the cameras containing the target in the first camera cluster and the second camera cluster, generates target identification information, and matches it to the tracked target. This improves the accuracy of the tracking result, reduces incorrect association between the tracking target and the target identifier caused by tracking-task deviation, and facilitates retrieval queries for the target.
  • FIG. 5 is a fourth sequence diagram of a virtual positioning method according to an embodiment of the present application.
• The service system may start an identification matching task, determine the cameras in the second camera cluster that contain the target based on that task, request the frame image information of the specified cameras from the second camera cluster, and identify the feature identifiers in the frame image information to generate the target's identification information.
• Performing virtual positioning on the target according to the virtual positioning data includes: responding to a target query instruction from a terminal device and obtaining the target identification information of the query instruction; determining a viewfinder camera based on the target identification information in combination with the target positioning data and/or the virtual positioning data; and feeding back the video stream of the target collected by the viewfinder camera to the terminal device.
• The target query instruction of the user terminal can be responded to and the target identifier of the query instruction matched; the viewfinder camera is determined according to predetermined rules from the positioning data of the first camera cluster and/or the virtual positioning data of the second camera cluster; and the frame image information collected by the viewfinder camera is sent to the user terminal for the user's query.
• The virtual positioning method further includes the following steps: the service system generates target identification information and, after receiving the matching queried target identifier from the user terminal, determines the preferred viewfinder camera; it obtains the frame image information stream of the determined viewfinder camera in the first camera cluster and/or the second camera cluster, generates a video information stream from it, and sends the video information stream to the user terminal. The preferred viewfinder camera may be located in either the first camera cluster or the second camera cluster.
• The virtual positioning method provided in this embodiment of the present application may also respond to an instruction from the user terminal to switch the display angle: the viewfinder camera is switched according to a predetermined rule based on the positioning data of the first camera cluster and/or the virtual positioning data of the second camera cluster, and the frame image information collected by the switched viewfinder camera is sent to the user terminal.
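• A minimal sketch of the viewfinder switching rule: given an ordered ring of cameras whose viewing ranges currently contain the target, a direction command moves to the adjacent camera. The ring ordering, direction names, and the function `switch_view` are illustrative assumptions; the actual predetermined rule is left open by the method.

```python
def switch_view(cameras, current, direction):
    """Return the next viewfinder camera in the requested direction.

    cameras   -- ordered list of cameras containing the target (treated as a ring)
    current   -- currently selected viewfinder camera
    direction -- "left" or "right" switching command from the user terminal
    """
    i = cameras.index(current)
    step = 1 if direction == "right" else -1
    return cameras[(i + step) % len(cameras)]

# Switching right from camB selects camC; switching left from camA wraps to camC.
nxt = switch_view(["camA", "camB", "camC"], "camB", "right")
```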
• FIG. 7 is a sixth sequence diagram of a virtual positioning method according to an embodiment of the present application. As shown in FIG. 7, the method is basically the same as that in FIG. 6, except that the service system switches the viewfinder camera in response to the display-angle switching instruction of the user terminal.
• The virtual positioning method may further include: in response to a tracking task request, determining a positioning calibration camera in the second camera cluster according to the virtual positioning data; obtaining the frame image cache information of the positioning calibration camera; generating the target's calibration positioning data in the positioning calibration camera based on the frame image cache information; and correcting the target positioning data according to the calibration positioning data.
• In this way, positioning of the target can be assisted. Specifically, in response to the target tracking task request, the positioning calibration camera in the second camera cluster is determined from the virtual positioning data of the second camera cluster; its frame image cache information is obtained; the target's positioning data in the positioning calibration camera is generated from the frame image cache information; and the target positioning data is corrected according to the positioning data of the positioning calibration camera.
• The starting condition of the tracking task request includes one of the following: the target is lost, the determination accuracy of the target is lower than a predetermined threshold, or a predetermined plan of the target tracking task.
• FIG. 8 is a seventh sequence diagram of a virtual positioning method according to an embodiment of the present application.
• The processing of the virtual positioning method includes the steps shown in FIG. 2 and may also include the following: the service system starts an assisted positioning request based on the target tracking task, determines the positioning calibration camera in the second camera cluster, requests the frame image cache information of the specified camera from the second camera cluster, and obtains the frame image cache information fed back by the second camera cluster. Based on this cache information, the service system generates the target's positioning data in the positioning calibration camera and corrects the target positioning data accordingly, thereby recovering the target tracking. Assisted positioning addresses the prior-art difficulty that, when a tracked target is lost due to occlusion or other interfering factors, tracking is hard to resume and the original target is easily lost. The assisted positioning request can also be a scheduled task of the target tracking task, using the frame image information of the calibration camera to improve the accuracy of the tracking results.
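• The correction step can be illustrated with a small sketch in which the tracked position is blended with the calibration camera's position. The blending weight and the function `correct_positioning` are assumptions for illustration; the method itself only states that the target positioning data is corrected according to the calibration positioning data.

```python
def correct_positioning(target_xy, calibration_xy, weight=0.5):
    """Blend the tracked position with the calibration camera's position.

    weight -- trust placed in the calibration camera (0 = ignore, 1 = replace)
    """
    return tuple(t * (1 - weight) + c * weight
                 for t, c in zip(target_xy, calibration_xy))

# Tracked at (4.0, 6.0), calibration camera says (5.0, 8.0): split the difference.
corrected = correct_positioning((4.0, 6.0), (5.0, 8.0), weight=0.5)
```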
• The virtual positioning method may further include: determining a viewfinder camera according to a predetermined rule based on the target positioning data and/or the virtual positioning data, and sending the frame image information collected by the viewfinder camera to a predetermined storage medium.
  • a viewfinder camera may be determined according to a predetermined rule according to the positioning data of the first camera cluster and/or the virtual positioning data of the second camera cluster, and frame image information collected by the viewfinder camera is sent to the storage system.
• FIG. 9 is an eighth sequence diagram of a virtual positioning method according to an embodiment of the present application.
• The processing of the virtual positioning method includes, in addition to the steps shown in FIG. 2, the following: the service system generates target identification information, determines the best viewfinder camera based on the identification information, selects the acquisition cameras in the first camera cluster and the second camera cluster, obtains the frame image information stream of the sampling cameras, generates a video information stream from the obtained frame image information, and sends the generated video information stream to the user terminal.
• FIG. 10(a) is a first schematic diagram of a scenic spot according to an embodiment of the present application.
• The scenic spot provides an MV art short film service for tourists: after recognizing a tourist's facial features, the service system of the scenic spot can automatically generate an MV art short video with the tourist as the protagonist.
• The initial part of the MV art short film template is set with video images of the tourist walking facing the camera at each scenic spot; the service system searches the video database of each scenic spot's camera collection group for video images containing the tourist's facial features and inserts the retrieved clips into the beginning of the MV art clip.
• The ending part of the MV art short film template is set with video images of the tourist walking away from the camera at each scenic spot, but the service system cannot retrieve back-view images of the tourist, which contain no facial features, through the existing technology.
• The following describes how the service system matches virtual identification information of a tourist's body to video images.
• FIG. 10(b) is a second schematic diagram of a scenic spot according to an embodiment of the present application.
• The corridors of the scenic spot are equipped with three viewfinder cameras, camera1, camera2 and camera3, and the mapping model system includes three-dimensional mapping model data for the three viewfinder cameras.
  • the three-dimensional mapping model data can have various modeling methods.
• A triangle template is placed among the three viewfinder cameras. From the actual shape and size of the triangle template and the positions of its three corners in each camera's viewfinder image, the corresponding function mapping relationship of the three cameras in three-dimensional space can be constructed.
  • the process of initializing modeling of the mapping model system is not specifically limited in this embodiment of the present application.
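• One small, concrete piece of the triangle-template idea can be sketched as follows: from the known real-world length of a template edge and its apparent pixel length in one camera's viewfinder image, a per-camera scale factor can be estimated. A full implementation would recover a 3-D pose per camera; this only illustrates the principle, and `pixel_to_world_scale` with its example numbers is an assumption.

```python
import math

def pixel_to_world_scale(corner_a_px, corner_b_px, real_length_m):
    """Meters per pixel along one template edge as seen by one camera."""
    pixel_len = math.dist(corner_a_px, corner_b_px)  # Euclidean pixel distance
    return real_length_m / pixel_len

# A 1.5 m template edge spans 300 pixels in this camera's viewfinder image,
# so one pixel corresponds to 0.005 m for objects in the template's plane.
scale = pixel_to_world_scale((100, 100), (400, 100), real_length_m=1.5)
```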
• camera1 and camera2 are set as the first camera cluster, and camera3 is set as the second camera cluster.
  • the tracking and positioning system processes the frame image information collected by camera1 and camera2 in the first camera cluster to generate positioning data for tourists.
  • the service system matches the tourist identification information to the positioning data through facial feature recognition.
• The service system extracts the three-dimensional model data of the first camera cluster and the second camera cluster from the mapping model system to obtain the three-dimensional mapping relationship of camera1, camera2 and camera3; it then extracts the tourist positioning data generated by the tracking and positioning system and, combined with the three-dimensional mapping relationship, computes the virtual positioning data corresponding to the tourist in camera3.
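• As an illustration of the computation just described, the mapping relationship can be stood in for by a simple affine transform from world coordinates to camera3 pixel coordinates. This is only a sketch: `to_virtual_position`, the affine parameterization, and all numbers are assumptions, since the actual mapping model may be any calibrated function.

```python
def to_virtual_position(world_xy, affine):
    """Map a world position to a camera's pixel coordinates.

    affine = (a, b, tx, c, d, ty): u = a*x + b*y + tx, v = c*x + d*y + ty
    """
    a, b, tx, c, d, ty = affine
    x, y = world_xy
    return (a * x + b * y + tx, c * x + d * y + ty)

# Tourist localized at world (2.0, 1.0) from camera1/camera2; camera3's
# assumed affine mapping yields the tourist's virtual pixel position there.
virtual_px = to_virtual_position((2.0, 1.0), (100, 0, 50, 0, 100, 30))
```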
• The service system can thereby obtain frame image information including back views of tourists, meeting the material requirements of the MV art short film service. Different retrieval conditions can also be set on the virtual positioning data to satisfy different material requirements: for example, setting the retrieval condition to virtual positioning data in which the tourist's face is located in the middle of the viewfinder yields image material showing the upper body, while setting it to virtual positioning data in which the tourist's face is located in a predetermined interval above and outside the viewfinder yields image material showing only the feet and legs.
  • FIG. 11 is a schematic diagram of a logistics transfer station according to an embodiment of the present application.
• A material monitoring and handover service system is deployed at the logistics transfer station for the goods receiving and dispatching link. It enables seamless handover between the material tracking and monitoring system of the transfer station and the monitoring system of the transport vehicle, realizing full monitoring and traceability of materials throughout the logistics process.
  • the logistics transfer station and the transport vehicle are each deployed with a full-process monitoring system
  • the first tracking and positioning system in the two sets of monitoring systems is deployed at the logistics transfer station, which is the local tracking and positioning system of the material monitoring and handover service system
  • the second tracking and positioning system in the two sets of monitoring systems is deployed on the transport vehicle and is the flow tracking and positioning system of the material monitoring and handover service system.
• The cameras of the first camera cluster are installed at the material conveying device of the logistics transfer station and are configured as the sampling cameras of the first tracking and positioning system, which is configured to generate the target's positioning data in the first camera cluster.
  • a camera of the second camera cluster is installed in the cargo compartment of the vehicle, and is configured as a sampling camera of the second tracking and positioning system, and the second tracking and positioning system is configured to generate positioning data of the second camera cluster of the target.
• The second camera cluster is an external camera group of the local monitoring system of the transfer station; its attribute in the local monitoring system of the transfer station is a local virtual camera, not a local physical camera. In the monitoring system of the transport vehicle, the attribute of the second camera cluster is the first camera cluster, i.e., the local physical cameras of the transport vehicle's monitoring system.
• The first tracking and positioning system and the second tracking and positioning system match identification information by scanning the barcode of the material. The barcode identifier is generated when a logistics worker manually scans the material label with a barcode scanner at the material handover position.
• The local mapping model system needs to perform initial modeling before monitoring material handover, establishing two-dimensional model data of the first camera cluster and the second camera cluster.
• The two-dimensional mapping data is obtained by the mapping model performing overlap analysis on the image information collected by the first camera cluster and the second camera cluster, constructing the corresponding mapping relationship of each camera in the two-dimensional model.
• The overlap analysis of the image information in this embodiment analyzes the partially overlapping area in the image information. As shown in the figure, most of the viewing areas of the two viewfinder cameras are covered by obstructions such as the cargo door, and only a small area of the two sets of image information overlaps.
• On the one hand, the influence of occluders on the modeling process should be avoided; on the other hand, the influence of the different brightness of the camera viewing environments inside and outside the vehicle on the modeling process should be avoided.
• The material monitoring and handover service system also has the following functions: extracting the two-dimensional model data from the mapping model system to obtain the two-dimensional mapping relationship between the first camera cluster and the second camera cluster; extracting the material positioning data generated by the first tracking and positioning system and, combined with the two-dimensional mapping relationship, computing the virtual positioning data corresponding to the material in the second camera cluster; receiving the material's positioning data in the second camera cluster generated by the second tracking and positioning system deployed on the transport vehicle; and verifying the consistency between that positioning data and the virtual positioning data. If the consistency is greater than a predetermined threshold, the material handover process is determined to be complete in both monitoring systems, and the handover is verified between the local tracking and positioning system and the mobile tracking and positioning system of the material monitoring and handover system. The two tracking and positioning systems mutually verify the material's tracking thread, which facilitates synchronized retrieval for material traceability.
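• The handover consistency check can be sketched as follows, scoring consistency as the fraction of positioning samples on which the measured and virtual data agree within a tolerance. The metric, tolerance, and threshold in `handover_verified` are illustrative assumptions; the embodiment only requires that some consistency measure exceed a predetermined threshold.

```python
def handover_verified(measured, virtual, tolerance=0.3, threshold=0.9):
    """Return True when the measured and virtual positioning sequences are
    consistent enough to declare the material handover complete.

    measured -- 1-D positions from the transport vehicle's tracking system
    virtual  -- virtual positions computed via the two-dimensional mapping
    """
    agree = sum(1 for m, v in zip(measured, virtual)
                if abs(m - v) <= tolerance)
    consistency = agree / len(measured)
    return consistency >= threshold

# All four samples agree within tolerance, so the handover is verified.
ok = handover_verified([1.0, 2.0, 3.1, 4.0], [1.1, 2.0, 3.0, 4.2])
```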
• FIG. 12 is a schematic diagram of a traffic monitoring area according to an embodiment of the present application. As shown in the figure, the urban traffic department installs densely distributed cameras in key traffic monitoring areas to track and monitor road vehicles.
  • the cameras are dense, and each vehicle is simultaneously in the viewing area of multiple cameras.
• According to a predetermined rule, the service system selects one of the cameras whose viewing area contains the vehicle as the sampling camera for the tracking task images and configures its attribute as the first camera cluster; the remaining cameras are configured as the second camera cluster of the tracking task thread.
• The service system analyzes the features of the tracked target from the collected images and can generate three types of target identifiers: vehicle color, vehicle type and vehicle number plate. For example, by identifying features in the sampled images, the service system can simultaneously match the three target identifiers "NO.002 red car", "NO.005 car" and "88888 license plate" to the target, and can use any of these three identifiers to retrieve the monitoring records of the vehicle.
• The mapping model system in the service system generates the two-dimensional model data of all cameras through initial modeling, establishing the corresponding mapping relationships of the cameras in the two-dimensional model. Through computation, the virtual positioning data corresponding to the tracked vehicle in the second camera cluster is obtained. Because the cameras are densely deployed, the tracked vehicle is matched with the virtual positioning data of multiple cameras at the same time.
• The service system determines the best sampling camera in the second camera cluster according to the virtual positioning data and converts its camera attribute to the first camera cluster; this camera replaces the original sampling camera of the tracking task thread and relays the frame image sampling of the target tracking task. Relay tracking ensures the continuity of target tracking tasks.
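• One plausible "best sampling camera" rule — picking the camera in which the target's virtual position lies closest to the center of the frame — can be sketched as below. The rule, frame size, and `best_sampling_camera` are assumptions; the embodiment leaves the selection rule to predetermined configuration.

```python
def best_sampling_camera(virtual_positions, frame_size=(1920, 1080)):
    """Pick the camera whose view places the target nearest the frame center.

    virtual_positions -- {camera_id: (x_px, y_px)} virtual position per camera
    """
    cx, cy = frame_size[0] / 2, frame_size[1] / 2
    return min(virtual_positions,
               key=lambda cam: (virtual_positions[cam][0] - cx) ** 2
                             + (virtual_positions[cam][1] - cy) ** 2)

# The vehicle is near the edge of cam5's frame but near the center of cam6's,
# so cam6 becomes the relay sampling camera.
cam = best_sampling_camera({"cam5": (1800, 900), "cam6": (1000, 560)})
```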
• The identification matching task of the service system is started every 5 seconds, reducing the risk that deviation in the target tracking task causes the tracking target and the target identifier to be erroneously associated.
• The service system determines the cameras in the second camera cluster that contain the target from the virtual positioning data of the second camera cluster and requests the frame image information of those cameras.
  • the identification verification task analyzes the frame image information of the cameras containing the target in the first camera cluster and the second camera cluster, generates target identification information, matches the tracked target, and improves the accuracy of the tracking result.
  • FIG. 13 is a schematic diagram of a kindergarten according to an embodiment of the present application. As shown in the figure, the kindergarten provides parents with a real-time video service for viewing children's activities from multiple perspectives. All public places of the kindergarten are equipped with chain image acquisition devices.
• The distance between adjacent cameras in the chain image acquisition device is 0.2 meters, and the mapping model system can construct the corresponding mapping relationship of each camera in the two-dimensional model by analyzing the overlapping areas of the image information collected by the cameras.
• Among every 20 adjacent cameras in the chain image acquisition device, the service system designates one camera as the sampling camera of the target tracking task and configures its attribute as the first camera cluster; the other cameras, not configured as the first camera cluster, are configured as the second camera cluster.
• The tracking and positioning system includes multiple target detection algorithms, including the frame difference method, background subtraction method, expectation maximization method, optical flow method, statistical model method and level set method, which can be switched in real time according to scene requirements. The service system pre-stores the facial features of the tracked objects and performs identity verification on the tracked and positioned targets at a predetermined calculation frequency, realizing identity-identifier matching of the tracked targets.
• After receiving the positioning data for each identity identifier, the virtual positioning system extracts the two-dimensional model data of the first camera cluster and the second camera cluster from the mapping model system, obtains the two-dimensional mapping relationship of all cameras, and computes the virtual positioning data corresponding to each identity identifier in each camera of the second camera cluster.
• The service system receives a target query instruction generated by a child's parent operating the user terminal and, in response, matches the identity identifier of the query instruction (that is, matches the facial feature identifier of the child the parent wants to query).
• The service system determines the cameras whose viewing range contains the target (that is, the cameras whose viewing range includes the child to be queried).
• From the cameras whose viewing range contains the target, the camera in which the target is closest to the middle of the viewing range is selected as the preferred viewfinder camera.
• Frame image information is extracted from the determined preferred viewfinder camera, converted into a video information stream, and sent to the user terminal for display to the user.
• The service system receives an instruction to switch the display angle generated by the child's parent through the user terminal. According to the switching direction of the viewing-angle instruction, another camera whose viewing range contains the target is switched in as the viewfinder camera; the frame images collected by this viewfinder camera are converted into a video information stream and sent to the user terminal for display to the operator.
  • FIG. 14 is a schematic diagram of a subway operation company according to an embodiment of the present application.
• The subway operation company deploys a multi-view mirror-type monitoring and recording system, which provides multi-angle mirror-style monitoring records of tracked targets.
• The chain image acquisition device collects frame image information through its different cameras and then samples the frame image information clusters collected by the different cameras according to predetermined rules to generate video, realizing video monitoring records in which the lens moves with the target.
• The subway operating company can start an uninterrupted mirror-type monitoring and recording thread for each passenger entering a subway station, covering the passenger's entire itinerary: entering the first subway station → boarding the subway train → alighting and entering the second subway station → leaving the second subway station.
  • Each mirror-based monitoring and recording thread corresponds to a tracked and recorded target, and multiple mirror-based monitoring and recording threads correspond to multiple tracked and recorded targets respectively.
• The number of recording threads that can be started is determined by the computing power of the service system. Dense deployment of multiple chain image acquisition devices enables multi-angle tracking video recording of passengers entering the subway station.
• A mirror-type monitoring and recording thread can contain images from multiple angles, and the corresponding system managers can simultaneously view and query the mirror-type monitoring records of tracked targets from multiple angles in real time.
• The identification verification device in this embodiment is set at the gate of the subway station and reads the RFID identifier of the subway ticket card or the image code identifier of the ticket. Tracking records of a given target can be retrieved through the service system only by the corresponding ticket card identifier; target features involving personal privacy, such as face recognition, cannot be used as retrieval identifiers.
• The distance between adjacent cameras in the chain image acquisition device is 0.4 meters, and dynamic lens-tracking video at a low frame rate of 1 fps can be produced in the viewing area; the viewfinder lens thus moves 0.4 meters per second, giving a mirror-like video presentation effect.
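• The figures above check out with a one-line calculation: advancing the viewfinder by one camera per frame moves the apparent lens by the camera spacing each second.

```python
# With cameras spaced 0.4 m apart and the viewfinder stepping one camera per
# frame at 1 fps, the apparent lens speed is spacing * frame rate.
camera_spacing_m = 0.4
frame_rate_fps = 1
lens_speed_m_per_s = camera_spacing_m * frame_rate_fps
```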
• The mapping model system analyzes the image information collected at the same moment by all cameras of each chain image acquisition device through an artificial intelligence system and constructs, in the three-dimensional model, the corresponding mapping relationships of all cameras of all chain image acquisition devices in the system. Dynamic models are also preset in the mapping model system; these are different three-dimensional models corresponding to different conditions.
• For example, the mapping relationships between the camera clusters in the subway station and the camera clusters in different subway trains, and between the camera clusters in a subway train and different subway station camera clusters, are selected once the precondition that the subway train has stopped at the subway station is determined.
• The chain image acquisition devices are densely distributed, and chain image acquisition devices with opposite viewing directions are arranged roughly parallel to each other.
• the service system selects the sampling cameras for the target tracking task through artificial intelligence analysis; the sampling cameras of the target tracking task are configured as the first camera cluster, and the other cameras not so configured are configured as the second camera cluster.
• determining the sampling cameras of the target task through artificial intelligence analysis allows the target to be tracked and sampled with the fewest cameras, reducing the amount of computation the system needs to perform the target tracking task and reducing the system load.
• the tracking and positioning system uses a deep neural network to track and locate each target in each place, and generates the positioning data of each target.
  • the virtual positioning system extracts the dynamic model data of the first camera cluster and the second camera cluster in the mapping model system, and after the subway train stops at the platform, updates the three-dimensional mapping relationship of all cameras under the currently determined conditions.
  • the virtual positioning system obtains the positioning data of each target generated by the tracking and positioning system, and combines the three-dimensional mapping relationship to obtain the corresponding virtual positioning data of each target in each camera in the second camera cluster through calculation processing.
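The step above, computing the virtual positioning data of a target in every second-cluster camera from its tracked position and the three-dimensional mapping relationship, can be sketched as a projection of the target's world coordinates into each camera. The pinhole camera model and the pose representation (rotation matrix `R`, translation `t`, intrinsics `fx`, `fy`, `cx`, `cy`) are illustrative assumptions, not details specified by the source.

```python
# Sketch: "virtual positioning" of a target in a second camera cluster.
# Inputs: the target's world position (from the tracking/positioning system)
# and each second-cluster camera's pose taken from the mapping model.

def project_to_camera(target_world, R, t, fx, fy, cx, cy):
    """Return the target's (u, v) pixel position in one camera, or None when
    the target lies behind the camera. R (3x3 nested lists) and t (length-3)
    map world coordinates into camera coordinates."""
    x, y, z = (
        sum(R[i][j] * target_world[j] for j in range(3)) + t[i]
        for i in range(3)
    )
    if z <= 0:  # behind the image plane: this camera cannot see the target
        return None
    return (fx * x / z + cx, fy * y / z + cy)

def virtual_positioning(target_world, second_cluster):
    """second_cluster: dict camera_id -> (R, t, fx, fy, cx, cy); returns the
    target's virtual positioning data in every second-cluster camera."""
    return {cam_id: project_to_camera(target_world, *params)
            for cam_id, params in second_cluster.items()}
```

A target the tracking system places at one point in the station thus acquires a computed (u, v) position in every second-cluster camera that could see it, without those cameras doing any image analysis.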
• the service system determines the tracked target and the cameras that contain the target within their viewing range, and selects the best viewfinder camera for each image acquisition chain.
• the service system obtains the frame image information of the best viewfinder camera, converts it into a video information stream and sends it to the storage system.
• the storage system stores the received video information stream to serve retrieval queries by system administrators.
• in the target tracking task, if the tracked target is lost or the target determination accuracy falls below the threshold, the target tracking task initiates an assisted positioning request.
• the service system determines, according to the virtual positioning data, a camera in the second camera cluster that meets the predetermined condition as a positioning calibration camera, obtains the frame image cache information of the positioning calibration camera, performs target tracking and positioning analysis based on the virtual positioning data and frame image cache information of the positioning calibration camera, and generates the positioning data of the target in the positioning calibration camera.
• the service system corrects the target positioning data according to the positioning data of the target in the positioning calibration camera, thereby recovering target tracking.
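One simple way the correction step above could combine the tracker's drifted estimate with the calibration camera's observation is a weighted blend. The blend weight is an assumption for illustration; the source does not specify the correction formula.

```python
# Sketch: correcting target positioning data using a positioning calibration
# camera's observation. The 0.7 weight (trusting the calibration camera more
# than the drifted tracker) is a hypothetical choice, not from the source.

def correct_position(predicted, calibrated, calibration_weight=0.7):
    """Blend the tracker's predicted position with the calibration camera's
    observed position, coordinate by coordinate."""
    w = calibration_weight
    return tuple(w * c + (1.0 - w) * p for p, c in zip(predicted, calibrated))

# Example: the tracker drifted to (10.0, 4.0) while the calibration camera
# observes the target at (12.0, 4.4); the blend lands near (11.4, 4.28).
corrected = correct_position((10.0, 4.0), (12.0, 4.4))
```

After the correction, the target tracking task resumes from the corrected position rather than the drifted one.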
• the assisted positioning request may also be a planned task of the target tracking task: the target tracking task is configured with a plan to periodically initiate target tracking assistance requests, dynamically extracting frame image information from different cameras to improve the accuracy of the tracking result.
  • FIG. 15 is a schematic diagram of the virtual positioning device according to the embodiment of the present application.
• the virtual positioning device includes: an obtaining unit 1501, a sending unit 1503 and a virtual positioning unit 1505.
  • the virtual positioning device will be described below.
• the obtaining unit 1501 is configured to obtain target positioning data of a target in a first camera cluster, wherein the first camera cluster consists of the cameras used by a positioning system, and the positioning system is used to perform a target tracking task to generate positioning data.
• the sending unit 1503 is configured to send the target positioning data to the positioning system, wherein the positioning system uses the target mapping model and the target positioning data to generate virtual positioning data of the target in the second camera cluster, and the target mapping model is used to describe the mapping relationship in space between the first camera cluster and the second camera cluster.
  • the virtual positioning unit 1505 is configured to perform virtual positioning on the target according to the virtual positioning data.
• the above obtaining unit 1501, sending unit 1503 and virtual positioning unit 1505 correspond to steps S102 to S106 in Embodiment 1; the examples and application scenarios implemented by the above units and the corresponding steps are the same, but are not limited to the content disclosed in Embodiment 1 above. It should be noted that the above units may be executed, as part of an apparatus, in a computer system such as a set of computer-executable instructions.
• the target positioning data of the target in the first camera cluster can be obtained by the obtaining unit, wherein the first camera cluster consists of the cameras used by the positioning system, and the positioning system is used to perform the target tracking task to generate positioning data; the sending unit then sends the target positioning data to the positioning system, wherein the positioning system uses the target mapping model and the target positioning data to generate virtual positioning data of the target in the second camera cluster, and the target mapping model is used to describe the mapping relationship in space between the first camera cluster and the second camera cluster; and the virtual positioning unit performs virtual positioning on the target according to the virtual positioning data.
• the virtual positioning device provided by the embodiment of the present application achieves the purpose of locating the target through camera clusters, achieves the technical effect of improving the accuracy of locating the target, and solves the technical problem in the related art that the application of image sensors is relatively simple and large-scale collaborative tracking cannot be performed using image sensors.
• a virtual positioning device is provided, including: an obtaining unit configured to obtain target positioning data of a target in a first camera cluster, wherein the first camera cluster consists of the cameras used by the positioning system, and the positioning system is used to perform the target tracking task to generate positioning data; a sending unit configured to send the target positioning data to the positioning system, wherein the positioning system uses the target mapping model and the target positioning data to generate virtual positioning data of the target in the second camera cluster, and the target mapping model is used to describe the spatial mapping relationship between the first camera cluster and the second camera cluster; and a virtual positioning unit configured to perform virtual positioning on the target according to the virtual positioning data.
• the target mapping model is a mapping model obtained by the positioning system from the mapping model system, wherein the mapping model system generates, by means of initialization modeling, a mapping model containing the position and viewing-angle relationships of each camera in the predetermined application scene.
• the virtual positioning device further includes: a verification unit configured to verify the virtual positioning data before the target is virtually positioned according to the virtual positioning data; wherein the verification unit includes: a first acquisition module configured to acquire the actual positioning data of the target in the second camera cluster; and a verification module configured to determine, using a predetermined verification rule, the similarity between the actual positioning data and the virtual positioning data, so as to check the consistency between the virtual positioning data and the actual positioning data.
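A predetermined verification rule of the kind described above might score the similarity between the virtual and actual positioning data and accept when it clears a threshold. The distance-to-similarity mapping and the threshold below are assumptions for illustration; the source leaves the rule unspecified.

```python
import math

# Sketch: consistency check between virtual positioning data (computed from
# the mapping model) and actual positioning data (observed by the camera),
# both treated here as image-plane points. The exponential score and the
# 0.6 threshold are hypothetical choices.

def similarity(virtual_xy, actual_xy, scale=50.0):
    """Map the pixel distance between virtual and actual positions into
    (0, 1]; 1.0 means the two coincide exactly."""
    d = math.dist(virtual_xy, actual_xy)
    return math.exp(-d / scale)

def verify(virtual_xy, actual_xy, threshold=0.6):
    """Pass the consistency check when the similarity clears the threshold."""
    return similarity(virtual_xy, actual_xy) >= threshold
```

A failed check would indicate that the mapping model or the tracking data has drifted and that recalibration or relay tracking is needed.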
• the virtual positioning device further includes: a first determining unit configured to respond to a relay tracking request signal and determine the best sampling camera in the second camera cluster according to the virtual positioning data; a setting unit configured to set the attribute of the best sampling camera as the first camera cluster to obtain an updated first camera cluster; the acquisition unit, configured to obtain the image information stream collected by the updated first camera cluster; and the sending unit, configured to send the image information stream to the positioning system, wherein the positioning system generates target positioning data of the target in the updated first camera cluster based on the image information stream.
• the first determining unit includes: a first determination module configured to determine the distribution density of cameras in the predetermined application scenario where the first camera cluster and the second camera cluster are located; a second determination module configured to respond to the relay tracking request signal when the distribution density is less than a predetermined value and the target is located at a predetermined position on the edge of the viewing area of the first camera cluster; or a third determination module configured to respond to the relay tracking request signal when the distribution density is not less than the predetermined value and the target leaves the middle position of the viewing area of the first camera cluster.
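The density-dependent relay trigger described above can be sketched as a simple geometric test. The density threshold and the widths of the "edge" and "middle" bands are hypothetical parameters; the source only states the two qualitative conditions.

```python
# Sketch: when to respond to a relay tracking request signal.
# target_pos_norm is the target's position across the first cluster's viewing
# area, normalized to [0, 1], where 0 and 1 are the edges and 0.5 the middle.
# DENSITY_THRESHOLD and edge_band are illustrative assumptions.

DENSITY_THRESHOLD = 1.0   # cameras per unit area (hypothetical value)

def should_relay(distribution_density, target_pos_norm, edge_band=0.1):
    at_edge = (target_pos_norm <= edge_band
               or target_pos_norm >= 1.0 - edge_band)
    left_middle = abs(target_pos_norm - 0.5) > edge_band
    if distribution_density < DENSITY_THRESHOLD:
        # Sparse cameras: relay only when the target reaches the edge,
        # since no nearby camera can take over earlier.
        return at_edge
    # Dense cameras: relay as soon as the target leaves the middle,
    # handing over to a better-placed camera early.
    return left_middle
```

The two branches correspond to the second and third determination modules above; a separate accuracy-based trigger (positioning accuracy below a threshold) can fire the same relay signal.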
• the first determining unit includes: a fourth determination module configured to determine that the positioning accuracy of the target is lower than a predetermined threshold; and a response module configured to respond to the relay tracking request signal.
• the virtual positioning device further includes: a second determining unit configured to, after the target is virtually positioned according to the virtual positioning data, determine, in response to an identification matching task, at least one camera in the second camera cluster that contains the target; the acquiring unit, configured to acquire frame image information of the at least one camera; and a first generating unit configured to recognize feature identifiers in the frame image information and generate target identification information of the target based on the feature identifiers.
• the virtual positioning unit includes: a second obtaining module configured to respond to a target query instruction of the terminal device and obtain the target identification information of the target query instruction; a fifth determining module configured to determine the viewfinder camera based on the target identification information combined with the target positioning data and/or the virtual positioning data; and a feedback module configured to feed back the video stream of the target collected by the viewfinder camera to the terminal device.
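The viewfinder-camera selection in the fifth determining module could, for instance, pick, among the cameras whose positioning data place the target in view, the one where the target sits closest to the frame center. The data layout and the center-distance criterion are assumptions for illustration.

```python
# Sketch: choosing a viewfinder camera for a queried target.
# positions: dict camera_id -> (u, v) normalized image position of the target
# in that camera, or None when the camera does not see the target.

def pick_viewfinder(positions, frame_center=(0.5, 0.5)):
    """Return the camera_id whose view places the target nearest the frame
    center, or None when no camera sees the target."""
    visible = {cid: uv for cid, uv in positions.items() if uv is not None}
    if not visible:
        return None
    def off_center(item):
        u, v = item[1]
        return (u - frame_center[0]) ** 2 + (v - frame_center[1]) ** 2
    return min(visible.items(), key=off_center)[0]
```

The feedback module would then stream video from the returned camera to the querying terminal device.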
• the virtual positioning device further includes: a third determining unit configured to determine, in response to a tracking task request and before the target is virtually positioned according to the virtual positioning data, a positioning calibration camera in the second camera cluster according to the virtual positioning data; an acquiring unit configured to obtain frame image cache information of the positioning calibration camera; a second generating unit configured to generate calibration positioning data of the target in the positioning calibration camera based on the frame image cache information; and a correction unit configured to correct the target positioning data according to the calibration positioning data.
  • the starting condition of the tracking task request includes one of the following: the target is lost, the determination accuracy of the target is lower than a predetermined threshold, and the predetermined plan of the target tracking task.
• the virtual positioning device further includes: a fourth determining unit configured to determine a viewfinder camera according to a predetermined rule based on the target positioning data and/or the virtual positioning data; and a sending unit configured to send the frame image information captured by the viewfinder camera to a predetermined storage medium.
• a computer-readable storage medium is also provided, which includes a stored computer program, wherein when the computer program is run by a processor, the device where the computer storage medium is located is controlled to execute any one of the above virtual positioning methods.
  • a processor is also provided, where the processor is configured to run a computer program, wherein when the computer program runs, any one of the above virtual positioning methods is executed.
  • a virtual positioning system is also provided, and the virtual positioning system uses any one of the above virtual positioning methods.
  • the disclosed technical content can be implemented in other ways.
• the device embodiments described above are only illustrative. For example, the division of the units may be a logical function division, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
• the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units or modules, and may be in electrical or other forms.
• the units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
• the technical solutions of the present application, in essence, or the parts that contribute to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
• the aforementioned storage medium includes: a USB flash drive, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a removable hard disk, a magnetic disk, an optical disk, or other media that can store program code.

Abstract

Disclosed are a virtual positioning method and apparatus, and a virtual positioning system. The method comprises: acquiring target positioning data of a target in a first camera cluster, wherein the first camera cluster consists of the cameras used by a positioning system, and the positioning system is used for executing a target tracking task in order to generate positioning data; sending the target positioning data to the positioning system, wherein the positioning system generates virtual positioning data of the target in a second camera cluster by using a target mapping model and the target positioning data, and the target mapping model is used for describing a mapping relationship between the first camera cluster and the second camera cluster in space; and performing virtual positioning on the target according to the virtual positioning data. By means of the present application, the technical problem in the related art that the application of an image sensor is relatively simple and large-scale collaborative tracking cannot be performed by using image sensors is solved.

Description

Virtual positioning method and device, and virtual positioning system
This application claims priority to the Chinese patent application No. 202011522213.2, entitled "Virtual Positioning Method and Device, and Virtual Positioning System", filed with the China Patent Office on December 21, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer technology, and in particular, to a virtual positioning method and device, and a virtual positioning system.
Background
With the exponential development of technology, the production cost of image sensors decreases year by year while their production scale grows exponentially, giving rise to various new applications such as chain image sensor technology. As image sensors are widely deployed in social scenes, a technology for the coordinated tracking of multiple image sensors is also required to meet users' target tracking needs.
At present, technologies that use image sensors for positioning and tracking are already widely applied, such as the tracking systems PTAM (Parallel Tracking and Mapping) and ACTS (Automatic Camera Tracking System). Different positioning and tracking technologies have different advantages and maturity, but these technologies are all applied to identification, positioning and tracking under a single surveillance camera.
Currently, a large-scale coordinated tracking technology for image sensors is lacking.
For the above problem in the related art that the application of image sensors is relatively simple and image sensors cannot be used for large-scale collaborative tracking, no effective solution has been proposed yet.
Summary
Embodiments of the present application provide a virtual positioning method and device, and a virtual positioning system, so as to at least solve the technical problem in the related art that the application of image sensors is relatively simple and large-scale collaborative tracking cannot be performed using image sensors.
According to an aspect of the embodiments of the present application, a virtual positioning method is provided, including: acquiring target positioning data of a target in a first camera cluster, wherein the first camera cluster consists of the cameras used by a positioning system, and the positioning system is used to perform a target tracking task to generate positioning data; sending the target positioning data to the positioning system, wherein the positioning system uses a target mapping model and the target positioning data to generate virtual positioning data of the target in a second camera cluster, and the target mapping model is used to describe the spatial mapping relationship between the first camera cluster and the second camera cluster; and performing virtual positioning on the target according to the virtual positioning data.
Optionally, the target mapping model is a mapping model obtained by the positioning system from a mapping model system, wherein the mapping model system generates, by means of initialization modeling, a mapping model containing the position and viewing-angle relationships of each camera in the predetermined application scene.
Optionally, before performing virtual positioning on the target according to the virtual positioning data, the virtual positioning method further includes: verifying the virtual positioning data; wherein verifying the virtual positioning data includes: acquiring the actual positioning data of the target in the second camera cluster; and determining, using a predetermined verification rule, the similarity between the actual positioning data and the virtual positioning data, so as to check the consistency between the virtual positioning data and the actual positioning data.
Optionally, the virtual positioning method further includes: responding to a relay tracking request signal and determining the best sampling camera in the second camera cluster according to the virtual positioning data; setting the attribute of the best sampling camera as the first camera cluster to obtain an updated first camera cluster; acquiring the image information stream collected by the updated first camera cluster; and sending the image information stream to the positioning system, wherein the positioning system generates target positioning data of the target in the updated first camera cluster based on the image information stream.
Optionally, responding to the relay tracking request signal includes: determining the distribution density of cameras in the predetermined application scenario where the first camera cluster and the second camera cluster are located; when it is determined that the distribution density is less than a predetermined value, responding to the relay tracking request signal when the target is located at a predetermined position on the edge of the viewing area of the first camera cluster; or, when it is determined that the distribution density is not less than the predetermined value, responding to the relay tracking request signal when the target leaves the middle position of the viewing area of the first camera cluster.
Optionally, responding to the relay tracking request signal includes: determining that the positioning accuracy of the target is lower than a predetermined threshold; and responding to the relay tracking request signal.
Optionally, after performing virtual positioning on the target according to the virtual positioning data, the virtual positioning method further includes: in response to an identification matching task, determining at least one camera in the second camera cluster that contains the target; acquiring frame image information of the at least one camera; and recognizing feature identifiers in the frame image information and generating target identification information of the target based on the feature identifiers.
Optionally, performing virtual positioning on the target according to the virtual positioning data includes: responding to a target query instruction of a terminal device and acquiring target identification information of the target query instruction; determining a viewfinder camera based on the target identification information combined with the target positioning data and/or the virtual positioning data; and feeding back the video stream of the target collected by the viewfinder camera to the terminal device.
Optionally, before performing virtual positioning on the target according to the virtual positioning data, the virtual positioning method further includes: in response to a tracking task request, determining a positioning calibration camera in the second camera cluster according to the virtual positioning data; acquiring frame image cache information of the positioning calibration camera; generating calibration positioning data of the target in the positioning calibration camera based on the frame image cache information; and correcting the target positioning data according to the calibration positioning data.
Optionally, the start condition of the tracking task request includes one of the following: the target is lost, the determination accuracy of the target is lower than a predetermined threshold, or the predetermined plan of the target tracking task.
Optionally, the virtual positioning method further includes: determining a viewfinder camera according to a predetermined rule based on the target positioning data and/or the virtual positioning data; and sending the frame image information collected by the viewfinder camera to a predetermined storage medium.
根据本申请实施例的另外一个方面,还提供了一种虚拟定位装置,包括:获取单元,设置为获取目标在第一摄像头集群中的目标定位数据,其中,所述第一摄像头集群是定位系统中用于采用的摄像头,所述定位系统被用于执行目标跟踪任务以生成定位数据;发送单元,设置为将所述目标定位数据发送至所述定位系统,其中,所述定位系统利用目标映射模型和所述目标定位数据生成所述目标在第二摄像头集群中的虚拟定位数据,所述目标映射模型用于描述所述第一摄像头集群与所述第二摄像头集群在空间上的映射关系;虚拟定位单元,设置为根据所述虚拟定位数据对所述目标进行虚拟定位。According to another aspect of the embodiments of the present application, a virtual positioning device is also provided, including: an obtaining unit configured to obtain target positioning data of a target in a first camera cluster, wherein the first camera cluster is a positioning system The camera used in the positioning system is used to perform a target tracking task to generate positioning data; a sending unit is configured to send the target positioning data to the positioning system, wherein the positioning system utilizes target mapping The model and the target positioning data generate virtual positioning data of the target in the second camera cluster, and the target mapping model is used to describe the spatial mapping relationship between the first camera cluster and the second camera cluster; A virtual positioning unit, configured to perform virtual positioning on the target according to the virtual positioning data.
可选地,所述目标映射模型为所述定位系统从映射模型系统获取的映射模型,其中,所述映射模型系统通过初始化建模的方式生成包含有预定应用场景内各摄像头位置和取景角度关系的映射模型。Optionally, the target mapping model is a mapping model obtained by the positioning system from a mapping model system, wherein the mapping model system generates a relationship between the positions of the cameras and the viewing angle in the predetermined application scene by initializing the modeling. the mapping model.
可选地,该虚拟定位装置还包括:校验单元,设置为在根据所述虚拟定位数据对所述目标进行虚拟定位之前,对所述虚拟定位数据进行校验;其中,所述校验单元,包括:第一获取模块,设置为获取所述目标在所述第二摄像头集群中的实际定位数据;校验模块,设置为利用预定校验规则确定所述实际定位数据与所述虚拟定位数据的相似度,以对所述虚拟定位数据与所述实际定位数据之前的一致性进行校验。Optionally, the virtual positioning device further includes: a verification unit, configured to verify the virtual positioning data before performing virtual positioning on the target according to the virtual positioning data; wherein, the verification unit , comprising: a first acquisition module, configured to acquire the actual positioning data of the target in the second camera cluster; a verification module, configured to use a predetermined verification rule to determine the actual positioning data and the virtual positioning data to verify the consistency between the virtual positioning data and the actual positioning data.
可选地,该虚拟定位装置还包括:第一确定单元,设置为响应于接力跟踪请求信号,并根据所述虚拟定位数据确定所述第二摄像头集群中的最佳采样摄像头;设置单 元,设置为将所述最佳采样摄像头的属性设置为第一摄像头集群,得到更新后的第一摄像头集群;所述获取单元,设置为获取所述更新后的第一摄像头集群采集的图像信息流;所述发送单元,设置为将所述图像信息流发送至所述定位系统,其中,所述定位系统基于所述图像信息流生成所述目标在所述更新后的第一摄像头集群中的目标定位数据。Optionally, the virtual positioning device further includes: a first determining unit configured to respond to a relay tracking request signal and determine the best sampling camera in the second camera cluster according to the virtual positioning data; a setting unit configured to set In order to set the attribute of the best sampling camera as the first camera cluster, the updated first camera cluster is obtained; the acquiring unit is configured to acquire the image information stream collected by the updated first camera cluster; The sending unit is configured to send the image information stream to the positioning system, wherein the positioning system generates target positioning data of the target in the updated first camera cluster based on the image information stream .
可选地，所述第一确定单元，包括：第一确定模块，设置为确定所述第一摄像头集群和所述第二摄像头集群所在预定应用场景中摄像头的分布密度；第二确定模块，设置为确定所述分布密度小于预定数值时，在所述目标位于所述第一摄像头集群取景区域的边缘预定位置时，响应于所述接力跟踪请求信号；或，第三确定模块，设置为确定所述分布密度不小于预定数值时，在所述目标离开所述第一摄像头集群取景区域的中间位置时，响应于所述接力跟踪请求信号。Optionally, the first determining unit includes: a first determining module, configured to determine the distribution density of cameras in the predetermined application scene where the first camera cluster and the second camera cluster are located; a second determining module, configured to, when it is determined that the distribution density is less than a predetermined value, respond to the relay tracking request signal when the target is located at a predetermined position at the edge of the viewing area of the first camera cluster; or a third determining module, configured to, when it is determined that the distribution density is not less than the predetermined value, respond to the relay tracking request signal when the target leaves the middle position of the viewing area of the first camera cluster.
可选地,所述第一确定单元,包括:第四确定模块,设置为确定所述目标的定位准确度低于预定阈值;响应模块,设置为响应于所述接力跟踪请求信号。Optionally, the first determination unit includes: a fourth determination module configured to determine that the positioning accuracy of the target is lower than a predetermined threshold; and a response module configured to respond to the relay tracking request signal.
可选地，该虚拟定位装置还包括：第二确定单元，设置为在根据所述虚拟定位数据对所述目标进行虚拟定位之后，响应于标识匹配任务，确定所述第二摄像头集群中包含有所述目标的至少一个摄像头；所述获取单元，设置为获取所述至少一个摄像头的帧图像信息；第一生成单元，设置为识别所述帧图像信息中的特征标识，并基于所述特征标识生成所述目标的目标标识信息。Optionally, the virtual positioning device further includes: a second determining unit, configured to, after virtual positioning is performed on the target according to the virtual positioning data, determine, in response to an identification matching task, at least one camera in the second camera cluster whose view contains the target; the acquiring unit, configured to acquire frame image information of the at least one camera; and a first generating unit, configured to recognize feature identifiers in the frame image information and generate target identification information of the target based on the feature identifiers.
可选地，所述虚拟定位单元，包括：第二获取模块，设置为响应于终端设备的目标查询指令，并获取所述目标查询指令的目标标识信息；第五确定模块，设置为基于所述目标标识信息结合所述目标定位数据和/或所述虚拟定位数据，确定取景摄像头；反馈模块，设置为将所述取景摄像头采集的所述目标的视频流反馈至所述终端设备。Optionally, the virtual positioning unit includes: a second acquisition module, configured to respond to a target query instruction from a terminal device and acquire target identification information of the target query instruction; a fifth determining module, configured to determine a viewfinder camera based on the target identification information in combination with the target positioning data and/or the virtual positioning data; and a feedback module, configured to feed back the video stream of the target collected by the viewfinder camera to the terminal device.
可选地，该虚拟定位装置还包括：第三确定单元，设置为在根据所述虚拟定位数据对所述目标进行虚拟定位之前，响应于跟踪任务请求，根据所述虚拟定位数据确定所述第二摄像头集群中的定位校准摄像头；所述获取单元，设置为获取所述定位校准摄像头的帧图像缓存信息；第二生成单元，设置为基于所述帧图像缓存信息生成所述目标在所述定位校准摄像头的校准定位数据；修正单元，设置为根据所述校准定位数据修正所述目标定位数据。Optionally, the virtual positioning device further includes: a third determining unit, configured to, before virtual positioning is performed on the target according to the virtual positioning data, determine, in response to a tracking task request, a positioning calibration camera in the second camera cluster according to the virtual positioning data; the acquiring unit, configured to acquire frame image cache information of the positioning calibration camera; a second generating unit, configured to generate calibration positioning data of the target at the positioning calibration camera based on the frame image cache information; and a correction unit, configured to correct the target positioning data according to the calibration positioning data.
可选地，所述跟踪任务请求的启动条件包括以下之一：所述目标丢失、所述目标的判定准确度低于预定阈值、所述目标跟踪任务的预定计划。Optionally, the start condition of the tracking task request includes one of the following: the target is lost, the determination accuracy of the target is lower than a predetermined threshold, or a predetermined schedule of the target tracking task.
可选地，该虚拟定位装置还包括：第四确定单元，设置为根据所述目标定位数据和/或所述虚拟定位数据，按照预定规则确定取景摄像头；所述发送单元，设置为将所述取景摄像头采集的帧图像信息发送至预定存储介质。Optionally, the virtual positioning device further includes: a fourth determining unit, configured to determine a viewfinder camera according to a predetermined rule based on the target positioning data and/or the virtual positioning data; and the sending unit, configured to send the frame image information collected by the viewfinder camera to a predetermined storage medium.
根据本申请实施例的另外一个方面,还提供了一种虚拟定位系统,使用上述中任一项所述的虚拟定位方法。According to another aspect of the embodiments of the present application, a virtual positioning system is also provided, using any one of the virtual positioning methods described above.
根据本申请实施例的另外一个方面，还提供了一种计算机可读存储介质，所述计算机可读存储介质包括存储的计算机程序，其中，在所述计算机程序被处理器运行时控制所述计算机存储介质所在设备执行上述中任一项所述的虚拟定位方法。According to another aspect of the embodiments of the present application, a computer-readable storage medium is also provided. The computer-readable storage medium includes a stored computer program, wherein, when the computer program is run by a processor, the device on which the computer-readable storage medium resides is controlled to execute the virtual positioning method described in any one of the above.
根据本申请实施例的另外一个方面,还提供了一种处理器,所述处理器用于运行计算机程序,其中,所述计算机程序运行时执行上述中任一项所述的虚拟定位方法。According to another aspect of the embodiments of the present application, a processor is also provided, where the processor is configured to run a computer program, wherein when the computer program runs, the virtual positioning method described in any one of the above is executed.
在本申请实施例中，采用获取目标在第一摄像头集群中的目标定位数据，其中，第一摄像头集群是定位系统中用于采样的摄像头，定位系统被用于执行目标跟踪任务以生成定位数据；将目标定位数据发送至定位系统，其中，定位系统利用目标映射模型和目标定位数据生成目标在第二摄像头集群中的虚拟定位数据，目标映射模型用于描述第一摄像头集群与第二摄像头集群在空间上的映射关系；根据虚拟定位数据对目标进行虚拟定位，通过本申请实施例提供的虚拟定位方法，实现了通过摄像头集群协同对目标进行定位的目的，达到了提高对目标进行定位的精确度的技术效果，进而解决了相关技术中图像传感器应用比较单一，无法利用图像传感器进行大规模的协同跟踪的技术问题。In the embodiments of the present application, target positioning data of a target in a first camera cluster is acquired, where the first camera cluster comprises the sampling cameras of a positioning system, and the positioning system is used to perform a target tracking task to generate positioning data; the target positioning data is sent to the positioning system, where the positioning system generates virtual positioning data of the target in a second camera cluster using a target mapping model and the target positioning data, the target mapping model describing the spatial mapping relationship between the first camera cluster and the second camera cluster; and the target is virtually positioned according to the virtual positioning data. The virtual positioning method provided by the embodiments of the present application thus achieves the purpose of positioning a target through camera-cluster collaboration, attains the technical effect of improving positioning accuracy, and thereby solves the technical problem in the related art that image sensors are applied in a relatively isolated manner and cannot be used for large-scale collaborative tracking.
附图说明Description of drawings
此处所说明的附图用来提供对本申请的进一步理解，构成本申请的一部分，本申请的示意性实施例及其说明用于解释本申请，并不构成对本申请的不当限定。在附图中：The drawings described herein are used to provide further understanding of the present application and constitute a part of the present application. The schematic embodiments of the present application and their descriptions are used to explain the present application and do not constitute an improper limitation of the present application. In the drawings:
图1是根据本申请实施例的虚拟定位方法的流程图;1 is a flowchart of a virtual positioning method according to an embodiment of the present application;
图2是根据本申请实施例的虚拟定位方法的时序图一;2 is a sequence diagram 1 of a virtual positioning method according to an embodiment of the present application;
图3是根据本申请实施例的虚拟定位方法的时序图二;3 is a second sequence diagram of a virtual positioning method according to an embodiment of the present application;
图4是根据本申请实施例的虚拟定位方法的时序图三;4 is a sequence diagram 3 of a virtual positioning method according to an embodiment of the present application;
图5是根据本申请实施例的虚拟定位方法的时序图四;5 is a sequence diagram 4 of a virtual positioning method according to an embodiment of the present application;
图6是根据本申请实施例的虚拟定位方法的时序图五;6 is a sequence diagram 5 of a virtual positioning method according to an embodiment of the present application;
图7是根据本申请实施例的虚拟定位方法的时序图六;7 is a sequence diagram 6 of a virtual positioning method according to an embodiment of the present application;
图8是根据本申请实施例的虚拟定位方法的时序图七;8 is a sequence diagram 7 of a virtual positioning method according to an embodiment of the present application;
图9是根据本申请实施例的虚拟定位方法的时序图八;9 is a sequence diagram 8 of a virtual positioning method according to an embodiment of the present application;
图10(a)是根据本申请实施例的景区的示意图一;Figure 10 (a) is a schematic diagram 1 of a scenic spot according to an embodiment of the present application;
图10(b)是根据本申请实施例的景区的示意图二;Figure 10 (b) is a schematic diagram 2 of a scenic spot according to an embodiment of the present application;
图11是根据本申请实施例的物流中转站的示意图;11 is a schematic diagram of a logistics transfer station according to an embodiment of the present application;
图12是根据本申请实施例的交通监控区域的示意图;12 is a schematic diagram of a traffic monitoring area according to an embodiment of the present application;
图13是根据本申请实施例的幼儿园的示意图;13 is a schematic diagram of a kindergarten according to an embodiment of the present application;
图14是根据本申请实施例的地铁运营公司的示意图;14 is a schematic diagram of a subway operating company according to an embodiment of the present application;
图15是根据本申请实施例的虚拟定位装置的示意图。FIG. 15 is a schematic diagram of a virtual positioning apparatus according to an embodiment of the present application.
具体实施方式Detailed Description
为了使本技术领域的人员更好地理解本申请方案，下面将结合本申请实施例中的附图，对本申请实施例中的技术方案进行清楚、完整地描述，显然，所描述的实施例仅仅是本申请一部分的实施例，而不是全部的实施例。基于本申请中的实施例，本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例，都应当属于本申请保护的范围。In order to enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, rather than all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the scope of protection of the present application.
需要说明的是,本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。It should be noted that the terms "first", "second", etc. in the description and claims of the present application and the above drawings are used to distinguish similar objects, and are not necessarily used to describe a specific sequence or sequence. It is to be understood that data so used may be interchanged under appropriate circumstances so that the embodiments of the application described herein can be practiced in sequences other than those illustrated or described herein. Furthermore, the terms "comprising" and "having" and any variations thereof, are intended to cover non-exclusive inclusion, for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to those expressly listed Rather, those steps or units may include other steps or units not expressly listed or inherent to these processes, methods, products or devices.
根据本申请实施例,提供了一种虚拟定位方法的方法实施例,需要说明的是,在附图的流程图示出的步骤可以在诸如一组计算机可执行指令的计算机系统中执行,并且,虽然在流程图中示出了逻辑顺序,但是在某些情况下,可以以不同于此处的顺序执行所示出或描述的步骤。According to an embodiment of the present application, a method embodiment of a virtual positioning method is provided. It should be noted that the steps shown in the flowchart of the accompanying drawings may be executed in a computer system such as a set of computer-executable instructions, and, Although a logical order is shown in the flowcharts, in some cases steps shown or described may be performed in an order different from that herein.
图1是根据本申请实施例的虚拟定位方法的流程图,如图1所示,该虚拟定位方 法包括如下步骤:Fig. 1 is the flow chart of the virtual positioning method according to the embodiment of the present application, as shown in Fig. 1, this virtual positioning method comprises the following steps:
步骤S102，获取目标在第一摄像头集群中的目标定位数据，其中，第一摄像头集群是定位系统中用于采样的摄像头，定位系统被用于执行目标跟踪任务以生成定位数据。Step S102: acquire target positioning data of the target in the first camera cluster, where the first camera cluster comprises the sampling cameras of the positioning system, and the positioning system is used to perform a target tracking task to generate positioning data.
可选的,这里的第一摄像头集群是跟踪定位系统(即,定位系统)的采样摄像头,跟踪定位系统被设置为执行目标跟踪任务生成定位数据。Optionally, the first camera cluster here is a sampling camera of a tracking and positioning system (ie, a positioning system), and the tracking and positioning system is configured to perform a target tracking task to generate positioning data.
步骤S104，将目标定位数据发送至定位系统，其中，定位系统利用目标映射模型和目标定位数据生成目标在第二摄像头集群中的虚拟定位数据，目标映射模型用于描述第一摄像头集群与第二摄像头集群在空间上的映射关系。Step S104: send the target positioning data to the positioning system, where the positioning system generates virtual positioning data of the target in the second camera cluster using the target mapping model and the target positioning data, and the target mapping model is used to describe the spatial mapping relationship between the first camera cluster and the second camera cluster.
可选的，这里的第二摄像头集群中的摄像头和第一摄像头集群的摄像头是固定式摄像头，摄像头的位置和角度是固定不变的，可以获取背景静止的帧图像信息；只要摄像头的位置和角度是固定不变的，摄像头的变焦不会改变图像采集的透视关系，只会改变取景区域，通过确定焦距变化与取景区域变化的关系，可以确定变焦前后的位置坐标点的对应关系。同理，只要摄像头预置有变焦参数，就可以计算出目标在各种变焦状态时的位置坐标值。Optionally, the cameras in the second camera cluster and the cameras in the first camera cluster are fixed cameras whose positions and angles do not change, so frame image information with a still background can be obtained. As long as a camera's position and angle are fixed, zooming does not change the perspective relationship of image acquisition but only changes the viewing area; by determining the relationship between focal-length change and viewing-area change, the correspondence between position coordinate points before and after zooming can be determined. Similarly, as long as the camera has preset zoom parameters, the position coordinates of the target in each zoom state can be calculated.
另外，在有的摄像头变焦过程中会出现镜头畸变问题，特别是广角摄像头和鱼眼摄像头的畸变明显，可以通过畸变校正技术消除变焦对目标定位的影响。因此，在已知摄像头变焦参数、镜头畸变参数和建模初始化焦距状态的前提下，目标定位数据和虚拟定位数据是可以不受摄像头变焦因素和镜头畸变因素的影响。In addition, lens distortion can occur while some cameras zoom, and the distortion of wide-angle and fisheye cameras is especially obvious; the influence of zooming on target positioning can be eliminated through distortion correction techniques. Therefore, provided the camera zoom parameters, the lens distortion parameters, and the focal-length state at modeling initialization are known, the target positioning data and the virtual positioning data can remain unaffected by camera zoom and lens distortion factors.
需要说明的是,在本申请实施例中,第二摄像头集群还可以是包含有链式图像获取装置,该链式图像获取装置的多个摄像头在同一数据传输总线中链状分布。It should be noted that, in the embodiment of the present application, the second camera cluster may also include a chain image acquisition device, and a plurality of cameras of the chain image acquisition device are distributed in a chain in the same data transmission bus.
此外，在本申请实施例中，跟踪定位系统可以包含有一套或多套目标跟踪算法，例如，帧差法、背景补偿法、期望最大化法、光流法、统计模型法、水平集方法以及并行追踪与绘制PTAM（Parallel Tracking and Mapping，简称PTAM）、相机自动追踪系统ACTS（Automatic Camera Tracking System，简称ACTS）等等，还可以是包含有深度神经网络，通过机器学习等手段实现对目标的跟踪和定位。In addition, in the embodiments of the present application, the tracking and positioning system may include one or more sets of target tracking algorithms, for example, the frame difference method, background compensation method, expectation maximization method, optical flow method, statistical model method, level set method, Parallel Tracking and Mapping (PTAM), Automatic Camera Tracking System (ACTS), and so on; it may also include a deep neural network, achieving target tracking and positioning by means such as machine learning.
步骤S106,根据虚拟定位数据对目标进行虚拟定位。Step S106, perform virtual positioning on the target according to the virtual positioning data.
由上可知，在本申请实施例中，可以获取目标在第一摄像头集群中的目标定位数据，其中，第一摄像头集群是定位系统中用于采样的摄像头，定位系统被用于执行目标跟踪任务以生成定位数据；将目标定位数据发送至定位系统，其中，定位系统利用目标映射模型和目标定位数据生成目标在第二摄像头集群中的虚拟定位数据，目标映射模型用于描述第一摄像头集群与第二摄像头集群在空间上的映射关系；根据虚拟定位数据对目标进行虚拟定位，实现了通过摄像头集群协同对目标进行定位的目的，达到了提高对目标进行定位的精确度的技术效果。As can be seen from the above, in the embodiments of the present application, target positioning data of the target in the first camera cluster can be acquired, where the first camera cluster comprises the sampling cameras of the positioning system, and the positioning system is used to perform a target tracking task to generate positioning data; the target positioning data is sent to the positioning system, where the positioning system generates virtual positioning data of the target in the second camera cluster using a target mapping model and the target positioning data, and the target mapping model is used to describe the spatial mapping relationship between the first camera cluster and the second camera cluster; and the target is virtually positioned according to the virtual positioning data, achieving the purpose of positioning the target through camera-cluster collaboration and attaining the technical effect of improving positioning accuracy.
因此,通过本申请实施例提供的虚拟定位方法,解决了相关技术中图像传感器应用比较单一,无法利用图像传感器进行大规模的协同跟踪的技术问题。Therefore, the virtual positioning method provided by the embodiments of the present application solves the technical problem that the application of the image sensor in the related art is relatively simple, and the image sensor cannot be used for large-scale collaborative tracking.
在一种可选的实施例中，目标映射模型为定位系统从映射模型系统获取的映射模型，其中，映射模型系统通过初始化建模的方式生成包含有预定应用场景内各摄像头位置和取景角度关系的映射模型。In an optional embodiment, the target mapping model is a mapping model obtained by the positioning system from a mapping model system, where the mapping model system generates, by means of initialization modeling, a mapping model containing the positional and viewing-angle relationships of the cameras within a predetermined application scene.
在该实施例中，摄像头集群映射模型是由映射模型系统通过初始化建模生成的包含有应用场景内各摄像头位置和取景角度关系的空间模型、描述有第一摄像头集群和第二摄像头集群在空间上的相互映射关系。In this embodiment, the camera cluster mapping model is a spatial model, generated by the mapping model system through initialization modeling, that contains the positional and viewing-angle relationships of the cameras in the application scene and describes the mutual spatial mapping relationship between the first camera cluster and the second camera cluster.
需要说明的是，在本申请实施例中，映射模型可以为二维模型、三维模型、动态模型之一；其中，二维模型基本上是通过分析各摄像头采集的初始化建模帧图像信息中的重叠区域生成，二维模型在具体实际应用过程中存在定位准确度低，易受三维环境影响的问题，其只适合于定位精准度要求不高的应用场景中；其中，三维模型可以通过分析各摄像头在建模初始化过程采集的初始化建模帧图像信息中三个以上的共同位置点，建立各摄像头在空间模型中的三维函数映射关系；三维模型还可以通过分析在建模初始化过程中一个目标在不同时间的位置坐标的方法进行建模；由于本申请实施例中的摄像头集群都是固定位置和固定取景角度，各摄像头取景区的背景在不同时间也是相同的，同一目标移动到三个以上的位置，通过分析目标位置在各摄像头取景区内的位置坐标就可以构建出各摄像头在空间模型中的三维函数映射关系；其中，动态模型是不同条件下对应的不同空间模型，可标准化的位置重复对应关系是构建动态模型的前提条件，例如，地铁列车到站后都会精准的停靠在相同的位置上，地铁列车车门内外的摄像头的空间映射关系，是列车精准停靠站台的前提条件来确定；又例如，电梯停靠在各楼层就是位置重复可标准化的场景，构建动态模型后，可以根据不同的电梯停靠条件触发确定不同的三维模型，获得不同的摄像头集群的二维或者三维映射关系。It should be noted that, in the embodiments of the present application, the mapping model may be one of a two-dimensional model, a three-dimensional model, and a dynamic model. A two-dimensional model is essentially generated by analyzing the overlapping regions in the initialization-modeling frame image information collected by the cameras; in practical applications it suffers from low positioning accuracy and is easily affected by the three-dimensional environment, so it is only suitable for application scenarios with low positioning-accuracy requirements. A three-dimensional model can be built by analyzing three or more common position points in the initialization-modeling frame image information collected by each camera during modeling initialization, establishing a three-dimensional function mapping relationship of each camera within the spatial model; a three-dimensional model can also be built by analyzing the position coordinates of one target at different times during modeling initialization. Since the camera clusters in the embodiments of the present application all have fixed positions and fixed viewing angles, the background of each camera's viewing area is also the same at different times; when the same target moves to three or more positions, the three-dimensional function mapping relationship of each camera in the spatial model can be constructed by analyzing the target's position coordinates within each camera's viewing area. A dynamic model consists of different spatial models corresponding to different conditions, and a standardizable, repeatable position correspondence is the precondition for constructing one. For example, a subway train always stops precisely at the same position when it arrives at a station, so the spatial mapping relationship between the cameras inside and outside the train doors is determined on the precondition that the train berths precisely at the platform. As another example, an elevator stopping at each floor is a scenario with repeatable, standardizable positions; after a dynamic model is constructed, different three-dimensional models can be triggered and selected according to different elevator stopping conditions, obtaining different two-dimensional or three-dimensional mapping relationships between camera clusters.
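The condition-triggered selection of a dynamic model described above can be sketched as a simple lookup keyed by the standardized, repeatable position state (which floor the elevator stopped at, whether the train is berthed). The class and method names below are illustrative assumptions, not part of the original disclosure:

```python
class DynamicMappingModel:
    """Minimal sketch of a 'dynamic model': one spatial mapping model per
    standardized, repeatable condition (e.g. the floor an elevator has
    stopped at, or a train berthed at a platform)."""

    def __init__(self):
        self._models = {}  # condition -> spatial mapping model

    def register(self, condition, model):
        self._models[condition] = model

    def select(self, condition):
        # The trigger condition determines which spatial model is active.
        if condition not in self._models:
            raise KeyError(f"no mapping model registered for {condition!r}")
        return self._models[condition]

# Usage: register one (placeholder) model per elevator floor, then select
# the model matching the floor the elevator has stopped at.
dynamic = DynamicMappingModel()
dynamic.register("floor-1", "model-floor-1")
dynamic.register("floor-2", "model-floor-2")
active = dynamic.select("floor-2")
```

In a real deployment the registered values would be the 2D/3D mapping relationships established during initialization modeling rather than placeholder strings.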
图2是根据本申请实施例的虚拟定位方法的时序图一，如图2所示，第一摄像头集群会将采集到的图像帧信息流发送至服务系统，服务系统将第一摄像头集群的帧图像信息流发送至跟踪定位系统，跟踪定位系统基于帧图像信息流生成定位数据，并将定位数据返回至服务系统，服务系统会将目标在第一摄像头集群的定位数据发送至虚拟定位系统，虚拟定位系统会向映射模型系统发送映射模型请求、并接收映射模型系统基于映射模型请求反馈的目标映射模型，基于目标映射模型以及目标在第一摄像头集群的定位数据生成目标在第二摄像头集群中的虚拟定位数据，并将虚拟定位数据反馈至服务系统。FIG. 2 is a first sequence diagram of the virtual positioning method according to an embodiment of the present application. As shown in FIG. 2, the first camera cluster sends the collected frame image information stream to the service system; the service system sends the frame image information stream of the first camera cluster to the tracking and positioning system; the tracking and positioning system generates positioning data based on the frame image information stream and returns the positioning data to the service system; the service system sends the positioning data of the target in the first camera cluster to the virtual positioning system; and the virtual positioning system sends a mapping model request to the mapping model system, receives the target mapping model fed back by the mapping model system based on the request, generates virtual positioning data of the target in the second camera cluster based on the target mapping model and the positioning data of the target in the first camera cluster, and feeds the virtual positioning data back to the service system.
即,在该实施例中,可以通过获取目标在第一摄像头集群中的定位数据,提取摄像头集群映射模型;并基于映射模型和目标定位数据生成目标在第二摄像头集群中对应的虚拟定位数据。That is, in this embodiment, the camera cluster mapping model can be extracted by acquiring the positioning data of the target in the first camera cluster; and virtual positioning data corresponding to the target in the second camera cluster can be generated based on the mapping model and the target positioning data.
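As a concrete illustration of generating virtual positioning data from a mapping model plus target positioning data, the sketch below assumes the cluster mapping reduces to a planar homography between two cameras' image planes; this is a simplifying assumption for the 2D case, since the source only requires some spatial mapping function. The matrix values are hypothetical:

```python
def map_to_virtual_camera(point_xy, H):
    """Map a target's pixel coordinate in a first-cluster camera to the
    corresponding virtual coordinate in a second-cluster camera via a
    3x3 planar homography H (given as nested lists)."""
    x, y = point_xy
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w  # perspective division back to pixel coordinates

# Hypothetical homography, as would be estimated during initialization
# modeling from common points seen by both cameras.
H = [[1.2, 0.1, 30.0],
     [0.0, 1.1, 15.0],
     [0.0, 0.0, 1.0]]
virtual_xy = map_to_virtual_camera((100.0, 200.0), H)  # -> (170.0, 235.0)
```

For the 3D models described later, the same idea generalizes: the homography is replaced by the per-camera three-dimensional function mapping established during initialization modeling.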
在一种可选的实施例中，在根据虚拟定位数据对目标进行虚拟定位之前，该虚拟定位方法还可以包括：对虚拟定位数据进行校验；其中，对虚拟定位数据进行校验，包括：获取目标在第二摄像头集群中的实际定位数据；利用预定校验规则确定实际定位数据与虚拟定位数据的相似度，以对虚拟定位数据与实际定位数据之间的一致性进行校验。In an optional embodiment, before virtual positioning is performed on the target according to the virtual positioning data, the virtual positioning method may further include: verifying the virtual positioning data, where verifying the virtual positioning data includes: acquiring actual positioning data of the target in the second camera cluster; and determining the similarity between the actual positioning data and the virtual positioning data using a predetermined verification rule, so as to verify the consistency between the virtual positioning data and the actual positioning data.
在该实施例中,第二摄像头集群中的定位数据是第二跟踪系统生成的目标定位数据,第二目标定位系统被设置为对第二摄像头集群进行采样、执行目标跟踪生成目标定位数据。In this embodiment, the positioning data in the second camera cluster is target positioning data generated by a second tracking system, and the second target positioning system is configured to sample the second camera cluster and perform target tracking to generate target positioning data.
需要说明的是,第二跟踪定位系统和第二摄像头集群可以附属于服务系统外部的监控服务系统,两套服务系统只有在初始化建模和目标校验时进行有限的数据交换。外部的摄像头集群在本地服务系统中不真实存在,只是映射模型中的虚拟映射的摄像头。通过校验虚拟定位数据和真实定位数据之间的一致性,可以检验目标跟踪的真实有效性,实现对目标跟踪任务进行严谨的过程控制。It should be noted that the second tracking and positioning system and the second camera cluster can be attached to a monitoring service system outside the service system, and the two service systems only perform limited data exchange during initial modeling and target verification. The external camera cluster does not really exist in the local service system, but is just a virtual mapping camera in the mapping model. By verifying the consistency between the virtual positioning data and the real positioning data, the authenticity and effectiveness of target tracking can be verified, and strict process control of the target tracking task can be realized.
图3是根据本申请实施例的虚拟定位方法的时序图二,如图3所示,该虚拟定位方法除了包括图2中的虚拟定位方式外,还包括对虚拟定位数据进行校验的流程;具体地,如图3所示,第二摄像头集群会将采集到的帧图像信息流发送至第二跟踪定位系统,第二跟踪定位系统会基于接收到的帧图像信息流生成实际定位数据(即,目标定位数据),并将实际定位数据发送至服务系统,服务系统可以基于实际定位数据对虚拟定位数据进行校验。3 is a sequence diagram 2 of a virtual positioning method according to an embodiment of the present application. As shown in FIG. 3 , in addition to the virtual positioning method in FIG. 2 , the virtual positioning method also includes a process of verifying virtual positioning data; Specifically, as shown in FIG. 3 , the second camera cluster will send the collected frame image information stream to the second tracking and positioning system, and the second tracking and positioning system will generate actual positioning data based on the received frame image information stream (ie , target positioning data), and send the actual positioning data to the service system, and the service system can verify the virtual positioning data based on the actual positioning data.
即,第二跟踪定位系统接收到目标的第二摄像头集群的定位数据,并按照预定规则校验目标的虚拟定位数据和第二摄像头集群的定位数据,生成目标在第二摄像头集群定位数据校验结果。That is, the second tracking and positioning system receives the positioning data of the second camera cluster of the target, and verifies the virtual positioning data of the target and the positioning data of the second camera cluster according to a predetermined rule, and generates a verification of the positioning data of the target in the second camera cluster result.
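One possible "predetermined verification rule" for the consistency check above is a distance threshold between the virtual and actual positions. The rule, the similarity score, and the tolerance below are illustrative assumptions, not fixed by the source:

```python
import math

def verify_consistency(virtual_xy, actual_xy, tolerance=20.0):
    """Treat virtual and actual positioning data as consistent when their
    Euclidean distance falls within a tolerance (pixels or scene units).
    Returns (is_consistent, similarity); the similarity score maps a
    distance of 0 to 1.0 and decays toward 0 as the distance grows."""
    dist = math.dist(virtual_xy, actual_xy)
    similarity = 1.0 / (1.0 + dist)
    return dist <= tolerance, similarity

consistent, score = verify_consistency((120.0, 240.0), (123.0, 244.0))
# distance = 5.0, within the default tolerance of 20.0
```

A failed check could then trigger the relay tracking or recalibration flows described in the following sections.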
在一种可选的实施例中，该虚拟定位方法还包括：响应于接力跟踪请求信号，并根据虚拟定位数据确定第二摄像头集群中的最佳采样摄像头；将最佳采样摄像头的属性设置为第一摄像头集群，得到更新后的第一摄像头集群；获取更新后的第一摄像头集群采集的图像信息流；将图像信息流发送至定位系统，其中，定位系统基于图像信息流生成目标在更新后的第一摄像头集群中的目标定位数据。In an optional embodiment, the virtual positioning method further includes: responding to a relay tracking request signal and determining the best sampling camera in the second camera cluster according to the virtual positioning data; setting the attribute of the best sampling camera as the first camera cluster to obtain an updated first camera cluster; acquiring the image information stream collected by the updated first camera cluster; and sending the image information stream to the positioning system, where the positioning system generates target positioning data of the target in the updated first camera cluster based on the image information stream.
在该实施例中,可以响应于目标跟踪任务的接力跟踪请求,并根据第二摄像头集群的虚拟定位数据,确定第二摄像头集群中的最佳采样摄像头,转化最佳摄像头属性为第一摄像头集群;其中,这里的最佳采样摄像头属性被转换为第一摄像头集群后,用于接替原有跟踪任务现场采样摄像头,接力目标跟踪任务的帧图像信息采样。In this embodiment, the best sampling camera in the second camera cluster can be determined according to the virtual positioning data of the second camera cluster in response to the relay tracking request of the target tracking task, and the best camera attribute can be converted into the first camera cluster ; wherein, after the best sampling camera attribute here is converted into the first camera cluster, it is used to replace the original tracking task on-site sampling camera, and relay the frame image information sampling of the target tracking task.
需要说明的是,在本申请实施例中,目标跟踪任务接力跟踪请求的启动条件是目标位于第一摄像头集群取景预定区域,和/或,目标判断准确度低于阈值。下面进行详细说明。It should be noted that, in this embodiment of the present application, the starting condition of the target tracking task relay tracking request is that the target is located in the predetermined viewing area of the first camera cluster, and/or the target determination accuracy is lower than a threshold. A detailed description will be given below.
一个方面，响应于接力跟踪请求信号，包括：确定第一摄像头集群和第二摄像头集群所在预定应用场景中摄像头的分布密度；确定分布密度小于预定数值时，在目标位于第一摄像头集群取景区域的边缘预定位置时，响应于接力跟踪请求信号；或，确定分布密度不小于预定数值时，在目标离开第一摄像头集群取景区域的中间位置时，响应于接力跟踪请求信号。In one aspect, responding to the relay tracking request signal includes: determining the distribution density of cameras in the predetermined application scene where the first camera cluster and the second camera cluster are located; when it is determined that the distribution density is less than a predetermined value, responding to the relay tracking request signal when the target is located at a predetermined position at the edge of the viewing area of the first camera cluster; or, when it is determined that the distribution density is not less than the predetermined value, responding to the relay tracking request signal when the target leaves the middle position of the viewing area of the first camera cluster.
在该实施例中,目标跟踪任务的接力跟踪请求可以根据摄像头现场布置的密度设定接力跟踪请求的启动标准。例如,摄像头密度小的场景下,各摄像头取景区域重叠度低,可以设定被跟踪目标位于第一摄像头集群采样摄像头取景区域边缘的预定范围时启动接力跟踪请求;例如,摄像头密度大的场景下,各摄像头取景区域重叠度高,可以设定被跟踪目标离开采样摄像头取景区域中间的预定范围时启动接力跟踪请求。例如,链式图像获取装置的摄像头密集非常高,可以设定一个启动标准高的接力跟踪请求条件,实现跟镜式的目标跟踪记录。In this embodiment, the relay tracking request of the target tracking task may set the start standard of the relay tracking request according to the density of the camera site arrangement. For example, in a scene with a low camera density, the overlapping degree of the viewing areas of each camera is low, and a relay tracking request can be initiated when the tracked target is located in a predetermined range at the edge of the sampling area of the first camera cluster sampling camera; for example, in a scene with a high camera density , the framing area of each camera has a high degree of overlap, and it can be set that the relay tracking request is started when the tracked target leaves the predetermined range in the middle of the framing area of the sampling camera. For example, the camera density of the chain image acquisition device is very high, and a relay tracking request condition with a high starting standard can be set to realize the mirror-based target tracking record.
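The density-dependent trigger rule above can be sketched as a single predicate. All threshold values (density cutoff, edge band, central region) are illustrative assumptions, since the source only says they are set per deployment:

```python
def should_relay(camera_density, target_xy, frame_w, frame_h,
                 density_threshold=0.5, edge_margin=0.1, center_margin=0.3):
    """Relay-tracking trigger: in sparse deployments (low viewing-area
    overlap) the request fires only when the target nears the frame edge;
    in dense deployments it fires as soon as the target leaves the
    central region of the current sampling camera's frame."""
    x, y = target_xy
    if camera_density < density_threshold:
        # Sparse scene: fire inside the predetermined edge band.
        return (x < frame_w * edge_margin or x > frame_w * (1 - edge_margin)
                or y < frame_h * edge_margin or y > frame_h * (1 - edge_margin))
    # Dense scene: fire once the target leaves the middle region.
    in_center = (frame_w * center_margin <= x <= frame_w * (1 - center_margin)
                 and frame_h * center_margin <= y <= frame_h * (1 - center_margin))
    return not in_center
```

For a chain image acquisition device with very high camera density, `center_margin` would be raised further, approximating the follow-shot style of tracking mentioned above.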
另外一个方面,响应于接力跟踪请求信号,包括:确定目标的定位准确度低于预定阈值;响应于接力跟踪请求信号。In another aspect, responding to the relay tracking request signal includes: determining that the positioning accuracy of the target is lower than a predetermined threshold; and responding to the relay tracking request signal.
图4是根据本申请实施例的虚拟定位方法的时序图三，如图4所示，除了包括图2中的虚拟定位方式外，还包括以下步骤：服务系统启动目标跟踪任务接力跟踪请求，接着确定第二摄像头集群中的最佳采样摄像头，并将指定的最佳采样摄像头发送至第二摄像头集群，第二摄像头集群将最佳采样摄像头的属性转换为第一摄像头集群，并接收更新后的第一摄像头集群发送的帧图像信息流，服务系统会将接收到的帧图像信息流发送至跟踪定位系统，跟踪定位系统会根据更新后的第一摄像头集群发送的帧图像信息流生成目标定位数据，并将目标定位数据发送至服务系统。FIG. 4 is a third sequence diagram of the virtual positioning method according to an embodiment of the present application. As shown in FIG. 4, in addition to the virtual positioning process in FIG. 2, the method also includes the following steps: the service system starts a relay tracking request for the target tracking task, then determines the best sampling camera in the second camera cluster and sends the designation of the best sampling camera to the second camera cluster; the second camera cluster converts the attribute of the best sampling camera to the first camera cluster; the service system receives the frame image information stream sent by the updated first camera cluster and forwards it to the tracking and positioning system; and the tracking and positioning system generates target positioning data according to the frame image information stream sent by the updated first camera cluster and sends the target positioning data to the service system.
Through the relay tracking in the embodiments of the present application, the continuity of the target tracking task can be ensured and the target positioning accuracy can be improved.
In an optional embodiment, after the target is virtually positioned according to the virtual positioning data, the virtual positioning method may further include: in response to an identification matching task, determining at least one camera in the second camera cluster that contains the target; acquiring frame image information of the at least one camera; and identifying a feature identifier in the frame image information and generating target identification information of the target based on the feature identifier.
In this embodiment, the feature identifier recognized from the frame image information may be an image feature, for example, a person's facial features, an object's contour features, color features, a barcode, or a two-dimensional code. Multiple identifiers may jointly be matched and associated with the same tracked target; for example, when performing feature recognition on a car, a color identifier of the car may be associated based on color features, a type identifier of the car may be associated based on contour features, and a license plate number identifier of the car may be associated through license plate recognition.
It should be noted that, in actual application, other non-image features may also be matched and associated, such as RFID radio frequency features, sound features, visible light strobe features, or motion features.
In addition, the service system may set different start conditions for the identification matching task: for example, a fixed start interval is set for the task; or the task starts in response to the target determination accuracy of the target tracking task falling below a threshold; or in response to the target being lost in the target tracking task; or in response to an identification verification instruction from a user terminal; and so on. When the identification verification task starts, the service system determines, through the virtual positioning data of the second camera cluster, the cameras in the second camera cluster that contain the target, and actively requests the frame image cache information of those cameras. The identification verification task analyzes the frame image information of the cameras containing the target in the first camera cluster and the second camera cluster, generates target identification information, and matches it to the tracked target, which improves the accuracy of the tracking result, reduces erroneous association between the tracked target and the target identifier caused by target tracking deviation, and facilitates retrieval queries for the target.
FIG. 5 is a fourth sequence diagram of the virtual positioning method according to an embodiment of the present application. As shown in FIG. 5, in addition to the virtual positioning process shown in FIG. 2, the method may further include the following process: the service system may start an identification matching task, determine, based on the identification matching task, the cameras in the second camera cluster that contain the target, then request the frame image information of the designated cameras from the second camera cluster, and identify the feature identifiers in the frame image information to generate the identification information of the target.
In an optional embodiment, virtually positioning the target according to the virtual positioning data includes: responding to a target query instruction from a terminal device and acquiring target identification information of the target query instruction; determining a viewfinder camera based on the target identification information in combination with the target positioning data and/or the virtual positioning data; and feeding back a video stream of the target captured by the viewfinder camera to the terminal device.
In this embodiment, the method may respond to a target query instruction from the user terminal and match the target identifier of the query instruction; determine a viewfinder camera according to a predetermined rule based on the positioning data of the first camera cluster and/or the virtual positioning data of the second camera cluster; and send the frame image information captured by the viewfinder camera to the user terminal for query by the user terminal.
FIG. 6 is a fifth sequence diagram of the virtual positioning method according to an embodiment of the present application. As shown in FIG. 6, in addition to the process shown in FIG. 2, the virtual positioning method further includes the following steps: the service system generates target identification information and, upon receiving and matching the queried target identifier from the user terminal, determines a preferred viewfinder camera; acquires the frame image information stream of the determined viewfinder camera in the first camera cluster and/or the determined viewfinder camera in the second camera cluster; and then sends the generated video information stream to the user terminal, wherein the preferred viewfinder camera may be located in either the first camera cluster or the second camera cluster.
In an optional embodiment, the virtual positioning method provided in the embodiments of the present application may also respond to a display-angle switching instruction from the user terminal and, based on the positioning data of the first camera cluster and/or the virtual positioning data of the second camera cluster, switch the viewfinder camera according to a predetermined rule; the frame image information captured by the switched viewfinder camera is sent to the user terminal.
FIG. 7 is a sixth sequence diagram of the virtual positioning method according to an embodiment of the present application. As shown in FIG. 7, the process is basically the same as that of FIG. 6; specifically, in FIG. 7 the service system responds to a display-angle switching instruction sent by the user terminal.
In an optional embodiment, before the target is virtually positioned according to the virtual positioning data, the virtual positioning method may further include: in response to a tracking task request, determining a positioning calibration camera in the second camera cluster according to the virtual positioning data; acquiring frame image cache information of the positioning calibration camera; generating calibration positioning data of the target at the positioning calibration camera based on the frame image cache information; and correcting the target positioning data according to the calibration positioning data.
In this embodiment, assisted positioning may be performed on the target. Specifically, in response to a target tracking task request, the positioning calibration camera in the second camera cluster may be determined according to the virtual positioning data of the second camera cluster; the frame image cache information of the positioning calibration camera is acquired; the positioning data of the target at the positioning calibration camera is generated according to the frame image cache information; and the target positioning data is corrected according to the positioning data of the positioning calibration camera.
The start condition of the tracking task request includes one of the following: the target is lost, the determination accuracy of the target is lower than a predetermined threshold, or a predetermined schedule of the target tracking task.
FIG. 8 is a seventh sequence diagram of the virtual positioning method according to an embodiment of the present application. As shown in FIG. 8, in addition to the steps shown in FIG. 2, the virtual positioning method may further include the following steps: the service system starts an assisted positioning request based on the target tracking task, determines the positioning calibration camera in the second camera cluster, requests the frame image cache information of the designated camera from the second camera cluster, and acquires the frame image cache information fed back by the second camera cluster; the service system sends the virtual positioning data of the positioning calibration camera and the frame image cache information to the tracking and positioning system, which generates the positioning data of the target at the positioning calibration camera and sends it to the service system; the service system then corrects the target positioning data based on this positioning data.
That is, in this embodiment, the service system corrects the target positioning data according to the positioning data of the target at the positioning calibration camera, so that target tracking can be recovered. Assisted positioning solves the problem in the prior art that, when a tracked target is occluded or otherwise disturbed and tracking is lost, it is difficult to continue tracking and the original tracked target is easily lost. The assisted positioning request may also be a scheduled request of the target tracking task: the target tracking task is configured with a schedule that periodically starts a target tracking assistance request, dynamically extracting frame image information from different cameras to improve the accuracy of the tracking result.
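One simple way to realize the correction step described above is a confidence-weighted fusion of the two position estimates. This is a minimal sketch under assumed conventions (planar (x, y) positions, scalar confidences); the specification does not fix a particular correction formula.

```python
# Illustrative sketch: correcting target positioning data with calibration
# positioning data from the positioning calibration camera, by taking a
# confidence-weighted average of the two position estimates.

def correct_position(primary, calibration, primary_conf, calibration_conf):
    """Fuse two (x, y) estimates according to their confidences."""
    total = primary_conf + calibration_conf
    return tuple(
        (p * primary_conf + c * calibration_conf) / total
        for p, c in zip(primary, calibration)
    )

corrected = correct_position((10.0, 20.0), (14.0, 20.0), 0.25, 0.75)
# corrected is (13.0, 20.0): the low-confidence primary estimate is
# pulled toward the calibration camera's estimate.
```

When the primary tracking confidence drops below the threshold named in the start conditions, the calibration camera's estimate dominates, which models the tracking-recovery behavior of the embodiment.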
In an optional embodiment, the virtual positioning method may further include: determining a viewfinder camera according to a predetermined rule based on the target positioning data and/or the virtual positioning data; and sending the frame image information captured by the viewfinder camera to a predetermined storage medium.
In this embodiment, the viewfinder camera may be determined according to a predetermined rule based on the positioning data of the first camera cluster and/or the virtual positioning data of the second camera cluster, and the frame image information captured by the viewfinder camera is sent to a storage system.
FIG. 9 is an eighth sequence diagram of the virtual positioning method according to an embodiment of the present application. As shown in FIG. 9, in addition to the steps shown in FIG. 2, the virtual positioning method may further include the following steps: the service system generates target identification information, determines the best viewfinder camera based on the identification information, selects sampling cameras from the first camera cluster and the second camera cluster, acquires the frame image information streams of the sampling cameras, generates a video information stream based on the acquired frame image information streams, and sends the generated video information stream to the user terminal.
The embodiments of the present application are described below with reference to different scenarios.
Scenario Embodiment 1
FIG. 10(a) is a first schematic diagram of a scenic area according to an embodiment of the present application. The scenic area provides tourists with an MV art short film service: the service system of the scenic area can recognize a tourist's facial features and then automatically generate an MV art short film with the tourist as the protagonist. The opening part of the MV template is set to video footage of the tourist walking toward the camera at each scenic spot; the service system searches the video library of the camera acquisition clusters at each spot for footage containing the tourist's facial features and inserts the retrieved clips into the opening part of the film. The closing part of the template is set to video footage of the tourist walking away from the camera at each spot; with the prior art, however, the service system cannot retrieve rear-view footage of the tourist, which contains no facial features.
The following is the implementation process by which the service system matches virtual body identification information of the tourist to video footage.
FIG. 10(b) is a second schematic diagram of the scenic area according to an embodiment of the present application. As shown in FIG. 10(b), three viewfinder cameras, camera1, camera2, and camera3, are installed along a corridor of the scenic area, and the mapping model system contains three-dimensional mapping model data of the three viewfinder cameras. The three-dimensional mapping model data can be built by various modeling methods; for example, a triangular template is placed among the three viewfinder cameras, and from the actual shape and dimensions of the template and the coordinate positions of its three corners in each camera's viewfinder image, the corresponding functional mapping relationship of the three cameras in three-dimensional space can be constructed. The initialization modeling process of the mapping model system is not specifically limited in the embodiments of the present application.
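To illustrate the triangular-template idea, the sketch below estimates a 2D affine mapping between two cameras' image planes from the three corner correspondences of the template. This is an assumption-laden simplification: a full 3D mapping as in the embodiment would need additional correspondences, and the function names are hypothetical.

```python
# Illustrative sketch: fitting a planar affine mapping x' = a*x + b*y + c,
# y' = d*x + e*y + f from three corner correspondences of a triangular
# calibration template, then projecting a point between cameras.

def solve3(m, v):
    """Solve a 3x3 linear system m @ x = v by Cramer's rule."""
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det(m)
    sol = []
    for i in range(3):
        mi = [row[:] for row in m]
        for r in range(3):
            mi[r][i] = v[r]
        sol.append(det(mi) / d)
    return sol

def fit_affine(src, dst):
    """src, dst: three (x, y) corner positions seen by camera A and camera B."""
    m = [[x, y, 1.0] for x, y in src]
    ax = solve3(m, [x for x, _ in dst])  # coefficients producing x'
    ay = solve3(m, [y for _, y in dst])  # coefficients producing y'
    return ax, ay

def apply_affine(ax, ay, p):
    x, y = p
    return (ax[0] * x + ax[1] * y + ax[2], ay[0] * x + ay[1] * y + ay[2])
```

Once fitted, `apply_affine` plays the role of the mapping relationship: a target position in camera1's frame is projected to the corresponding virtual position in camera3's frame.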
In this embodiment, camera1 and camera2 are set as the first camera cluster, and camera3 is set as the second camera cluster. The tracking and positioning system processes the frame image information collected by camera1 and camera2 in the first camera cluster and generates positioning data for the tourist. Through facial feature recognition, the service system matches tourist identification information to the positioning data.
In addition, the service system extracts the three-dimensional model data of the first camera cluster and the second camera cluster from the mapping model system to obtain the three-dimensional mapping relationship among camera1, camera2, and camera3; it also extracts the tourist positioning data generated by the tracking and positioning system and, in combination with the three-dimensional mapping relationship, computes the virtual positioning data corresponding to the tourist in camera3.
Furthermore, by retrieving the virtual positioning data, the service system can obtain frame image information containing the rear view of the tourist, satisfying the material requirements of the MV art short film service. Different retrieval conditions on the virtual positioning data can also be set to satisfy different material requirements: for example, retrieving virtual positioning data in which the tourist's face is positioned in the middle of the frame yields image material showing the upper body, while retrieving virtual positioning data in which the tourist's face falls within a predetermined interval above and outside the frame yields image material showing only the feet and legs.
Scenario Embodiment 2
FIG. 11 is a schematic diagram of a logistics transfer station according to an embodiment of the present application. As shown in the figure, a material monitoring and handover service system is deployed at the goods receiving and dispatching stage of the transfer station; it completes the seamless handover of materials between the transfer station's tracking and monitoring system and the transport vehicle's monitoring system, achieving full monitoring and traceability of materials throughout the logistics process.
In the embodiment shown in the figure, the logistics transfer station and the transport vehicle each deploy a full-process monitoring system. The first tracking and positioning system of the two monitoring systems is deployed at the transfer station and serves as the local tracking and positioning system of the material monitoring and handover service system; the second tracking and positioning system is deployed on the transport vehicle and serves as the mobile tracking and positioning system of the material monitoring and handover service system. In the embodiment, cameras of the first camera cluster are installed at the material conveying device of the transfer station and configured as sampling cameras of the first tracking and positioning system, which is configured to generate the first-camera-cluster positioning data of the target; cameras of the second camera cluster are installed in the cargo compartment of the transport vehicle and configured as sampling cameras of the second tracking and positioning system, which is configured to generate the second-camera-cluster positioning data of the target. The second camera cluster is an external camera set of the transfer station's local monitoring system: its attribute in the local monitoring system is that of a local virtual camera, not a local physical camera, whereas its attribute in the transport vehicle's monitoring system is that of the first camera cluster, that is, the local physical cameras of the transport vehicle's monitoring system.
It should be noted that the first tracking and positioning system and the second tracking and positioning system match identification information by scanning material barcodes; a material barcode is produced by a logistics worker manually scanning the material label with a handheld barcode scanner at the material handover position.
The local mapping model system needs to perform initialization modeling before monitoring material handover, establishing two-dimensional model data of the first camera cluster and the second camera cluster. The two-dimensional mapping data is built by the mapping model performing overlap analysis on the image information collected by the first camera cluster and the second camera cluster, constructing the mapping relationship of each camera in the two-dimensional model. In this embodiment, the overlap analysis of the image information analyzes the partially overlapping regions: as shown in the figure, most of the viewing areas of the two viewfinder cameras are covered by obstructions such as the cargo compartment door, and only a small region of the two sets of image information overlaps. During initialization modeling, the mapping model system of this embodiment must, on the one hand, avoid the influence of obstructions on the modeling process and, on the other hand, avoid the influence of the differing brightness of the viewing environments inside and outside the compartment.
The material monitoring and handover service system further has the following functions: extracting the two-dimensional model data from the mapping model system to obtain the two-dimensional mapping relationship between the first camera cluster and the second camera cluster; extracting the material positioning data generated by the first tracking and positioning system and, in combination with the two-dimensional mapping relationship, computing the virtual positioning data corresponding to the material in the second camera cluster; and receiving the positioning data of the material in the second camera cluster generated by the second tracking and positioning system deployed on the transport vehicle, and checking the consistency between that positioning data and the virtual positioning data. If the consistency exceeds a predetermined threshold, the material handover process between the two monitoring systems is determined to be complete, and the material monitoring and handover system determines that the local tracking and positioning system and the mobile tracking and positioning system have passed material handover verification. The two tracking and positioning systems mutually verify the material's tracking threads, which facilitates synchronized retrieval for material traceability.
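The consistency check against a predetermined threshold can be sketched as below. The distance-based score, the scale, and the threshold value are assumptions for illustration; the specification only requires that a consistency measure be compared with a threshold.

```python
# Illustrative sketch: checking the consistency between the measured
# position of the material in the second camera cluster and the virtual
# position projected from the first cluster's data.

import math

def consistency(measured, virtual, scale=100.0):
    """Map the distance between two (x, y) estimates to a score in (0, 1]."""
    d = math.dist(measured, virtual)
    return 1.0 / (1.0 + d / scale)

def handover_verified(measured, virtual, threshold=0.8):
    """True when the two estimates agree closely enough to pass handover."""
    return consistency(measured, virtual) >= threshold
```

A close match (a few pixels apart) passes verification; a distant one fails, signalling that the two monitoring systems are not tracking the same material.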
Scenario Embodiment 3
FIG. 12 is a schematic diagram of a traffic monitoring area according to an embodiment of the present application. As shown in the figure, the urban traffic department installs dense cameras in key traffic monitoring areas to implement tracking-style monitoring of road vehicles.
In the embodiment, the cameras are dense and each vehicle is simultaneously within the viewing areas of multiple cameras. When the service system starts a tracking task thread for a particular vehicle, it selects, according to a predetermined rule, one of the cameras whose viewing area contains the vehicle to sample images for the tracking task and configures the attribute of this sampling camera as the first camera cluster; the remaining cameras are configured as the second camera cluster of the tracking task thread.
The service system performs feature analysis on the tracked target based on the collected images and can generate three types of target identifiers: vehicle color, vehicle type, and vehicle license plate. For example, through feature recognition on the sampled images, the service system can simultaneously match the target with the three identifiers "NO.002 red car", "NO.005 sedan", and "88888 license plate", and can then retrieve the vehicle's monitoring records through any of these three identifiers.
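The multi-identifier association can be sketched with a small record type. The class and method names are hypothetical; the point is that several independently recognized identifiers attach to one tracked target and each can serve as a retrieval key.

```python
# Illustrative sketch: associating several recognized feature identifiers
# (color, vehicle type, license plate) with one tracked target record.

class TrackedTarget:
    def __init__(self, track_id):
        self.track_id = track_id
        self.identifiers = {}  # identifier type -> recognized value

    def associate(self, id_type, value):
        """Attach a recognized identifier to the tracked target."""
        self.identifiers[id_type] = value

    def matches(self, id_type, value):
        """Allow retrieval of the target by any of its identifiers."""
        return self.identifiers.get(id_type) == value

car = TrackedTarget("track-17")
car.associate("color", "NO.002 red car")
car.associate("vehicle_type", "NO.005 sedan")
car.associate("plate", "88888")
```

A monitoring-record query for any one of the three identifiers then resolves to the same tracking thread `track-17`.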
The mapping model system in the service system generates two-dimensional model data of all cameras through initialization modeling; the two-dimensional model data is built by the mapping model system performing overlap analysis on the image information collected by each camera, constructing the corresponding mapping relationship of each camera in the two-dimensional model.
Based on the positioning data of the tracked vehicle in the first camera cluster generated by the tracking and positioning system, combined with the two-dimensional model data extracted from the mapping model system, the virtual positioning data corresponding to the tracked vehicle in the second camera cluster is obtained through computation. Because the cameras are dense, the tracked vehicle target is simultaneously matched with the virtual positioning data of multiple cameras.
After the tracked vehicle travels into the predetermined range of the sampling camera's viewing area, continuing to use that camera as the sampling camera of the tracking task thread would reduce the accuracy of target tracking and positioning and might even cause the tracked target to be lost. In response to a relay tracking request of the target tracking task, the service system determines the best sampling camera in the second camera cluster according to the virtual positioning data and converts that camera's attribute to the first camera cluster, so that it takes over from the original sampling camera of the tracking task thread and continues sampling frame image information for the target tracking task. Relay tracking ensures the continuity of the target tracking task.
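One plausible "best sampling camera" rule, given that the vehicle already has virtual positioning data in several second-cluster cameras, is to pick the camera in which the target sits closest to the frame center. This sketch assumes that layout; the specification leaves the selection rule open.

```python
# Illustrative sketch: choosing the relay sampling camera from the second
# cluster as the one whose virtual target position is nearest its frame center.

def best_sampling_camera(virtual_positions, frame_w=1920, frame_h=1080):
    """virtual_positions: {camera_id: (x, y) virtual target position}.

    Returns the camera id whose target position is nearest the frame center.
    """
    cx, cy = frame_w / 2, frame_h / 2
    return min(
        virtual_positions,
        key=lambda cam: (virtual_positions[cam][0] - cx) ** 2
                      + (virtual_positions[cam][1] - cy) ** 2,
    )
```

A camera seeing the vehicle near its frame edge loses to one seeing it near the center, which is exactly the situation that triggers the relay handover.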
In this embodiment, the identification matching task of the service system is started every 5 seconds, reducing erroneous association between the tracked target and the target identifier caused by target tracking deviation. When the identification verification task starts, the service system determines, through the virtual positioning data of the second camera cluster, the cameras in the second camera cluster that contain the target, and actively requests the frame image information of those cameras. The identification verification task analyzes the frame image information of the cameras containing the target in the first camera cluster and the second camera cluster, generates target identification information, and matches it to the tracked target, improving the accuracy of the tracking result.
Scenario Embodiment 4
FIG. 13 is a schematic diagram of a kindergarten according to an embodiment of the present application. As shown in the figure, the kindergarten provides parents with a real-time video service for viewing children's activities from multiple perspectives; chain image acquisition apparatuses are installed in all public areas of the kindergarten.
The cameras in the chain image acquisition apparatus are spaced 0.2 meters apart; the mapping model system can construct the corresponding mapping relationship of each camera in the two-dimensional model by performing overlap region analysis on the image information collected by the existing cameras.
In the embodiment, for every 20 adjacent cameras in the chain image acquisition apparatus, the service system designates 1 camera as the sampling camera of the target tracking task; the attribute of this sampling camera is configured as the first camera cluster, and the cameras not configured as the first camera cluster are configured as the second camera cluster. In application scenarios where cameras are densely arranged on site, if every camera were used for sampling in the target tracking task, the system would struggle to bear the load, and the oversampling for tracking and positioning would severely waste system resources.
In the embodiment, the tracking and positioning system contains multiple target detection algorithms, including the frame difference method, background subtraction, expectation maximization, optical flow, statistical model methods, level set methods, and so on, and can switch the target detection algorithm in real time according to scene requirements. The service system pre-stores the facial features of the tracked subjects and performs identification verification on the tracked and positioned targets at a predetermined computation frequency, achieving identity matching of the tracked targets.
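Of the detection algorithms listed, the frame difference method is the simplest to illustrate. The sketch below reduces it to grayscale frames stored as nested lists; real systems operate on camera frames and add noise filtering, and the threshold here is a hypothetical value.

```python
# Illustrative sketch: the frame difference method, producing a binary motion
# mask that is 1 where consecutive frames differ significantly.

def frame_difference(prev, curr, threshold=25):
    """prev, curr: equal-sized 2D grids of grayscale pixel values."""
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev, curr)
    ]

prev = [[10, 10, 10],
        [10, 10, 10]]
curr = [[10, 200, 10],
        [10, 210, 10]]
# A moving object appears as 1s in the middle column of the mask.
```

Background subtraction follows the same pattern with a maintained background model in place of `prev`, which is one reason the embodiment can switch algorithms at run time.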
In this embodiment, after receiving the positioning data for each identity, the virtual positioning system extracts the two-dimensional model data of the first and second camera clusters from the mapping model system, obtains the two-dimensional mapping relationships of all cameras, and through computation obtains the virtual positioning data of each identity for each camera in the second camera cluster.
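One plausible reading of this computation is a frame change: once a target's position is known in the shared model, its virtual position relative to each second-cluster camera is the model position minus that camera's origin. This sketch assumes such a translation-only mapping; the camera-origin dictionary and function name are hypothetical.

```python
def virtual_positions(target_model_xy, camera_origins, second_cluster):
    """Express a target's shared-model position in each second-cluster
    camera's local frame (assuming a translation-only 2D mapping)."""
    tx, ty = target_model_xy
    return {cam: (tx - camera_origins[cam][0], ty - camera_origins[cam][1])
            for cam in second_cluster}
```

For a target 0.4 m along the chain, a camera whose origin is at 0.4 m sees it at its own local origin.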
As shown in the figure, the service system receives a target query instruction generated by a child's parent through the user terminal and, in response to the target query instruction, matches the identity associated with the instruction (i.e., the facial feature identifier of the child the parent wishes to locate). By retrieving the positioning data and virtual positioning data corresponding to that identity, the service system determines the cameras whose viewing range contains the target (i.e., the cameras whose view contains the child in question). From the camera cluster whose viewing range contains the target, the camera in which the target is closest to the central region of the view is selected as the preferred viewfinder camera. Frame image information is extracted from the selected viewfinder camera, converted into a video stream, and sent to the user terminal for display to the user.
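The "closest to the central region" selection can be sketched as a one-line argmin; the representation of each candidate as a normalized horizontal position within its camera's view is an assumption for illustration.

```python
def pick_viewfinder(candidates, view_width=1.0):
    """Pick the camera in which the target is nearest the view centre.

    `candidates` maps camera id -> target's horizontal position within
    that camera's view (0 .. view_width); a 1D view model is assumed.
    """
    centre = view_width / 2
    return min(candidates, key=lambda cam: abs(candidates[cam] - centre))
```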
As shown in the figure, when the service system receives a view-switching instruction generated by the child's parent through the user terminal, it retrieves the positioning data and virtual positioning data corresponding to the identity, determines the cameras whose viewing range contains the target, and switches the viewfinder camera among them in the direction indicated by the view-switching instruction. The frame images collected by the viewfinder camera are converted into a video stream and sent to the user terminal for display to the operator.
Scenario Embodiment 5
Figure 14 is a schematic diagram of a subway operating company according to an embodiment of the present application. As shown in the figure, the subway operating company deploys a multi-view follow-shot monitoring and recording system, using chain image acquisition devices with different viewing directions to achieve simultaneous multi-angle follow-shot monitoring and recording of tracked targets. The chain image acquisition device collects frame image information through different cameras and then, according to predetermined rules, samples the frame image clusters collected by the different cameras to generate video, producing a monitoring recording in which the lens follows the target as it moves. By deploying chain image acquisition devices, the subway operating company can start an uninterrupted follow-shot monitoring and recording thread for each passenger entering a subway station, covering the passenger's entire journey: entering the first station → boarding the train and leaving the first station → riding the train → alighting and entering the second station → leaving the second station. Each follow-shot monitoring and recording thread corresponds to one tracked target, multiple threads correspond to multiple tracked targets, and the number of recording threads that can be started is determined by the computing capacity of the service system. Densely deploying multiple chain image acquisition devices enables multi-angle tracking video recording of passengers entering the station; a follow-shot monitoring and recording thread can contain footage from multiple angles, and the corresponding system administrators can simultaneously view in real time, and query, the follow-shot recordings of a tracked target from multiple angles.
Since a subway station is a public place, and under the privacy protection laws of some jurisdictions facial recognition may not be applied to the public in public places, the identity verification device in this embodiment is installed at the station gates and reads the RFID tag of the subway ticket card or the ticket card's image code. Tracking records of a target can be queried through the service system only by the identifier corresponding to the ticket card; target features involving personal privacy, including facial recognition, cannot be used as retrieval identifiers.
The cameras in this chain image acquisition device are spaced 0.4 meters apart, allowing dynamic follow-shot tracking video at a low frame rate of 1 fps within the viewing area. For example, tracking the target camera by camera according to the chain image acquisition device's per-camera acquisition rule produces a follow-shot presentation in which the viewpoint moves 0.4 meters per second.
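At 1 fps with 0.4 m camera spacing, the follow-shot effect amounts to choosing, once per second, the camera nearest the target's position along the chain. This sketch assumes a straight chain and a known per-second target trajectory; both the function name and the nearest-camera rule are illustrative.

```python
SPACING_M = 0.4  # spacing stated in this embodiment

def follow_shot_schedule(target_positions_m, spacing=SPACING_M):
    """For each 1-second sample of the target's position along the chain,
    pick the index of the nearest camera; the resulting sequence is the
    per-frame camera schedule for the follow-shot video."""
    return [round(p / spacing) for p in target_positions_m]
```

A target walking 0.4 m/s thus advances the viewpoint by one camera per frame.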
The mapping model system uses an artificial intelligence system to analyze the image information collected at the same time by all cameras of each chain image acquisition device, and constructs the mapping relationships, in a three-dimensional model, of all cameras of all chain image acquisition devices in the system. The mapping model system is also preloaded with dynamic models, i.e., different three-dimensional models corresponding to different conditions. The mapping relationships between the camera clusters in a subway station and the camera clusters in different subway trains, and between the camera clusters in a subway train and those in different stations, are selected and determined once the precondition — which train is stopped at which station — has been established.
In this embodiment, the chain image acquisition devices are densely distributed, with devices of opposite viewing directions arranged roughly in parallel. Through artificial intelligence analysis, the service system selects the sampling cameras for the target tracking task; the sampling cameras are configured as the first camera cluster, and the cameras not so configured are configured as the second camera cluster. Selecting the sampling cameras through artificial intelligence analysis allows the target to be tracked and sampled with the fewest possible cameras, reducing the computation required for the target tracking task and lowering the system load. The tracking and positioning system uses deep neural networks to track and locate each target in each venue and generates positioning data for each target.
In this embodiment, the virtual positioning system extracts the dynamic model data of the first and second camera clusters from the mapping model system and, after a subway train stops at the platform, updates the three-dimensional mapping relationships of all cameras under the currently established conditions. The virtual positioning system obtains the positioning data of each target generated by the tracking and positioning system and, combining it with the three-dimensional mapping relationships, computes the virtual positioning data of each target for each camera in the second camera cluster.
Based on the positioning data of the first camera cluster and the virtual positioning data of the second camera cluster, and corresponding to the target tracking task, the service system identifies the tracked target and the cameras whose viewing range contains it, and selects the optimal viewfinder camera on each image acquisition chain.
The service system obtains the frame image information of the optimal viewfinder camera, converts it into a video stream, and sends it to the storage system. The storage system stores the received video stream for retrieval and query by system administrators.
During a target tracking task, if the tracked target is lost or the target determination accuracy falls below a threshold, the task issues an assisted positioning request. In response to the assisted positioning request, the service system uses the virtual positioning data to select a camera in the second camera cluster that meets predetermined conditions as a positioning calibration camera, obtains that camera's cached frame images, performs target tracking and positioning analysis on the cached frames together with the calibration camera's virtual positioning data, and generates the target's positioning data in the positioning calibration camera. The service system then corrects the target positioning data according to the positioning data in the calibration camera, restoring target tracking. Assisted positioning addresses a weakness of existing techniques, in which tracking is difficult to resume once it is lost — for example because the target is occluded — and the original tracked target is easily lost. The assisted positioning request may also be a scheduled task of the target tracking task: the task can be configured with a plan that periodically issues tracking assistance requests, dynamically extracting frame images from different cameras to improve the accuracy of the tracking result.
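The loss-recovery flow above can be sketched as a small decision function: if confidence is adequate and the target is not lost, do nothing; otherwise pick a calibration camera from the second cluster and adopt its estimate. The track dictionary, the "best-centred camera" selection rule, and the confidence reset are all illustrative assumptions; the real system would re-analyze the calibration camera's cached frames.

```python
def maybe_recover(track, virtual_positions, threshold=0.6):
    """Correct a track using a positioning calibration camera when the
    target is lost or its determination accuracy drops below `threshold`.

    `track` is a dict with 'confidence', 'lost', 'position';
    `virtual_positions` maps second-cluster camera id -> the target's
    virtual (x, y) in that camera's local frame.
    """
    if track["confidence"] >= threshold and not track["lost"]:
        return track  # no assistance needed
    # choose the calibration camera where the target is best centred
    cam = min(virtual_positions, key=lambda c: abs(virtual_positions[c][0]))
    # a real system would re-run detection on the camera's cached frames;
    # here we simply adopt its virtual estimate as the corrected fix
    corrected = dict(track)
    corrected["position"] = virtual_positions[cam]
    corrected["confidence"] = 1.0
    corrected["lost"] = False
    return corrected
```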
According to another aspect of the embodiments of the present application, a virtual positioning apparatus is also provided. Figure 15 is a schematic diagram of the virtual positioning apparatus according to an embodiment of the present application. As shown in Figure 15, the virtual positioning apparatus includes an obtaining unit 1501, a sending unit 1503, and a virtual positioning unit 1505, described below.
The obtaining unit 1501 is configured to obtain target positioning data of a target in a first camera cluster, where the first camera cluster comprises the sampling cameras used in a positioning system, and the positioning system is used to perform a target tracking task to generate positioning data.
The sending unit 1503 is configured to send the target positioning data to the positioning system, where the positioning system uses a target mapping model and the target positioning data to generate virtual positioning data of the target in a second camera cluster, and the target mapping model describes the spatial mapping relationship between the first camera cluster and the second camera cluster.
The virtual positioning unit 1505 is configured to perform virtual positioning of the target according to the virtual positioning data.
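The three units can be read as a simple pipeline: obtain positioning data, have the positioning system map it through the target mapping model, and return the virtual position. This class is a hypothetical sketch of that structure only — the method names on the positioning system (`get_positioning`, `map_to_second_cluster`) are invented for illustration and do not appear in the disclosure.

```python
class VirtualPositioningDevice:
    """Minimal sketch of units 1501/1503/1505 as a three-step pipeline."""

    def __init__(self, positioning_system):
        self.ps = positioning_system

    def run(self, target_id):
        data = self.ps.get_positioning(target_id)         # obtaining unit 1501
        virtual = self.ps.map_to_second_cluster(data)     # sending unit 1503
        return {"target": target_id, "virtual": virtual}  # positioning unit 1505
```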
It should be noted here that the obtaining unit 1501, the sending unit 1503, and the virtual positioning unit 1505 correspond to steps S102 to S106 in Embodiment 1. The examples and application scenarios implemented by these units are the same as those of the corresponding steps, but are not limited to the content disclosed in Embodiment 1. It should be noted that, as part of the apparatus, these units may be executed in a computer system such as a set of computer-executable instructions.
As can be seen from the above, in the above embodiments of the present application, the obtaining unit may obtain target positioning data of a target in a first camera cluster, where the first camera cluster comprises the sampling cameras used in a positioning system that performs target tracking tasks to generate positioning data; the sending unit then sends the target positioning data to the positioning system, which uses a target mapping model and the target positioning data to generate virtual positioning data of the target in a second camera cluster, the target mapping model describing the spatial mapping relationship between the first and second camera clusters; and the virtual positioning unit performs virtual positioning of the target according to the virtual positioning data. The virtual positioning apparatus provided by the embodiments of the present application achieves the purpose of locating a target through camera cluster collaboration, achieves the technical effect of improving positioning accuracy, and solves the technical problem in the related art that image sensors are applied in relatively limited ways and cannot be used for large-scale collaborative tracking.
According to another aspect of the embodiments of the present application, a virtual positioning apparatus is also provided, including: an obtaining unit configured to obtain target positioning data of a target in a first camera cluster, where the first camera cluster comprises the sampling cameras used in a positioning system, and the positioning system is used to perform a target tracking task to generate positioning data; a sending unit configured to send the target positioning data to the positioning system, where the positioning system uses a target mapping model and the target positioning data to generate virtual positioning data of the target in a second camera cluster, and the target mapping model describes the spatial mapping relationship between the first and second camera clusters; and a virtual positioning unit configured to perform virtual positioning of the target according to the virtual positioning data.
In an optional embodiment, the target mapping model is a mapping model that the positioning system obtains from a mapping model system, where the mapping model system generates, through initialization modeling, a mapping model containing the positional and viewing-angle relationships of the cameras in a predetermined application scenario.
In an optional embodiment, the virtual positioning apparatus further includes a verification unit configured to verify the virtual positioning data before the target is virtually positioned according to it. The verification unit includes: a first obtaining module configured to obtain the actual positioning data of the target in the second camera cluster; and a verification module configured to determine the similarity between the actual positioning data and the virtual positioning data using a predetermined verification rule, so as to check the consistency between the virtual positioning data and the actual positioning data.
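The disclosure does not specify the predetermined verification rule, so this sketch assumes one plausible choice: accept the virtual positioning data if its Euclidean distance from the actual positioning data is within a tolerance. Both the tolerance value and the distance metric are assumptions.

```python
import math

def verify(virtual_xy, actual_xy, max_distance=0.5):
    """Assumed verification rule: virtual and actual positioning data are
    consistent if their Euclidean distance is within `max_distance` metres."""
    return math.dist(virtual_xy, actual_xy) <= max_distance
```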
In an optional embodiment, the virtual positioning apparatus further includes: a first determining unit configured to respond to a relay tracking request signal and determine the optimal sampling camera in the second camera cluster according to the virtual positioning data; a setting unit configured to set the attribute of the optimal sampling camera to the first camera cluster, obtaining an updated first camera cluster; the obtaining unit configured to obtain the image information stream collected by the updated first camera cluster; and the sending unit configured to send the image information stream to the positioning system, where the positioning system generates target positioning data of the target in the updated first camera cluster based on the image information stream.
In an optional embodiment, the first determining unit includes: a first determining module configured to determine the distribution density of cameras in the predetermined application scenario where the first and second camera clusters are located; a second determining module configured to, when the distribution density is determined to be less than a predetermined value, respond to the relay tracking request signal when the target is at a predetermined position at the edge of the first camera cluster's viewing area; or a third determining module configured to, when the distribution density is determined to be not less than the predetermined value, respond to the relay tracking request signal when the target leaves the middle position of the first camera cluster's viewing area.
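The density-dependent relay trigger can be sketched as follows. The edge margin, the definition of the "middle" region as the central third of the view, and the density threshold are all hypothetical values supplied for illustration; the disclosure only fixes the sparse-vs-dense branching logic.

```python
def should_relay(density, target_x, view_width=1.0,
                 density_threshold=0.5, edge=0.1):
    """Decide whether to trigger relay tracking.

    Sparse deployment (density below threshold): relay only when the
    target nears the edge of the current cluster's viewing area.
    Dense deployment: relay as soon as the target leaves the middle
    region (assumed here to be the central third of the view).
    """
    if density < density_threshold:
        return target_x <= edge or target_x >= view_width - edge
    centre_lo, centre_hi = view_width / 3, 2 * view_width / 3
    return not (centre_lo <= target_x <= centre_hi)
```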
In an optional embodiment, the first determining unit includes: a fourth determining module configured to determine that the positioning accuracy of the target is below a predetermined threshold; and a response module configured to respond to the relay tracking request signal.
In an optional embodiment, the virtual positioning apparatus further includes: a second determining unit configured to, after the target is virtually positioned according to the virtual positioning data, determine, in response to an identification matching task, at least one camera in the second camera cluster whose view contains the target; the obtaining unit configured to obtain frame image information of the at least one camera; and a first generating unit configured to recognize feature identifiers in the frame image information and generate target identification information of the target based on the feature identifiers.
In an optional embodiment, the virtual positioning unit includes: a second obtaining module configured to respond to a target query instruction from a terminal device and obtain the target identification information of the instruction; a fifth determining module configured to determine a viewfinder camera based on the target identification information combined with the target positioning data and/or the virtual positioning data; and a feedback module configured to feed back the video stream of the target collected by the viewfinder camera to the terminal device.
In an optional embodiment, the virtual positioning apparatus further includes: a third determining unit configured to, before the target is virtually positioned according to the virtual positioning data, determine a positioning calibration camera in the second camera cluster according to the virtual positioning data in response to a tracking task request; the obtaining unit configured to obtain the cached frame image information of the positioning calibration camera; a second generating unit configured to generate calibration positioning data of the target in the positioning calibration camera based on the cached frame image information; and a correction unit configured to correct the target positioning data according to the calibration positioning data.
In an optional embodiment, the starting condition of the tracking task request includes one of the following: the target is lost; the determination accuracy of the target is below a predetermined threshold; or a scheduled plan of the target tracking task is reached.
In an optional embodiment, the virtual positioning apparatus further includes: a fourth determining unit configured to determine a viewfinder camera according to a predetermined rule based on the target positioning data and/or the virtual positioning data; and the sending unit configured to send the frame image information collected by the viewfinder camera to a predetermined storage medium.
According to another aspect of the embodiments of the present application, a computer-readable storage medium is also provided. The computer-readable storage medium includes a stored computer program, where, when the computer program is run by a processor, the device on which the storage medium resides is controlled to execute any one of the virtual positioning methods described above.
According to another aspect of the embodiments of the present application, a processor is also provided. The processor is configured to run a computer program, where the computer program, when run, executes any one of the virtual positioning methods described above.
According to another aspect of the embodiments of the present application, a virtual positioning system is also provided, which uses any one of the virtual positioning methods described above.
The serial numbers of the above embodiments of the present application are for description only and do not indicate the relative merits of the embodiments.
In the above embodiments of the present application, the description of each embodiment has its own emphasis. For parts not detailed in a given embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of units may be a division by logical function; in actual implementation there may be other divisions, such as combining multiple units or components or integrating them into another system, or some features may be ignored or not executed. Furthermore, the mutual coupling, direct coupling, or communication connections shown or discussed may be indirect coupling or communication connections through interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application — in essence, or the part that contributes to the prior art, or all or part of the technical solution — may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The above are merely preferred embodiments of the present application. It should be pointed out that those of ordinary skill in the art may make further improvements and refinements without departing from the principles of the present application, and such improvements and refinements shall also fall within the scope of protection of the present application.

Claims (25)

  1. A virtual positioning method, comprising:
    obtaining target positioning data of a target in a first camera cluster, wherein the first camera cluster comprises the sampling cameras used in a positioning system, and the positioning system is used to perform a target tracking task to generate positioning data;
    sending the target positioning data to the positioning system, wherein the positioning system generates virtual positioning data of the target in a second camera cluster using a target mapping model and the target positioning data, and the target mapping model describes the spatial mapping relationship between the first camera cluster and the second camera cluster;
    performing virtual positioning of the target according to the virtual positioning data.
  2. The method according to claim 1, wherein the target mapping model is a mapping model obtained by the positioning system from a mapping model system, and the mapping model system generates, through initialization modeling, a mapping model containing the positional and viewing-angle relationships of the cameras in a predetermined application scenario.
  3. The method according to claim 1, further comprising, before performing virtual positioning of the target according to the virtual positioning data: verifying the virtual positioning data;
    wherein verifying the virtual positioning data comprises:
    obtaining actual positioning data of the target in the second camera cluster;
    determining the similarity between the actual positioning data and the virtual positioning data using a predetermined verification rule, so as to check the consistency between the virtual positioning data and the actual positioning data.
  4. The method according to claim 1, further comprising:
    in response to a relay tracking request signal, determining the optimal sampling camera in the second camera cluster according to the virtual positioning data;
    setting the attribute of the optimal sampling camera to the first camera cluster to obtain an updated first camera cluster;
    obtaining the image information stream collected by the updated first camera cluster;
    sending the image information stream to the positioning system, wherein the positioning system generates target positioning data of the target in the updated first camera cluster based on the image information stream.
  5. The method according to claim 4, wherein responding to the relay tracking request signal comprises:
    determining a distribution density of cameras in a predetermined application scenario where the first camera cluster and the second camera cluster are located; and
    when it is determined that the distribution density is less than a predetermined value, responding to the relay tracking request signal when the target is located at a predetermined position at the edge of the viewing area of the first camera cluster; or
    when it is determined that the distribution density is not less than the predetermined value, responding to the relay tracking request signal when the target leaves the middle position of the viewing area of the first camera cluster.
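The density-dependent trigger of claim 5 reduces to a small decision rule: sparse deployments wait until the target reaches the edge of the current viewing area, while dense deployments hand over as soon as the target leaves the central region. A minimal sketch, with the boolean inputs standing in for whatever geometric tests a real system would run:

```python
def should_respond(density, density_threshold,
                   target_at_edge, target_left_center):
    """Decide whether to respond to a relay tracking request signal.

    `density` and `density_threshold` model the camera distribution
    density and the predetermined value; `target_at_edge` and
    `target_left_center` are assumed precomputed geometric predicates.
    """
    if density < density_threshold:
        # Sparse scene: delay handover until the target nears the edge.
        return target_at_edge
    # Dense scene: hand over as soon as the target leaves the center.
    return target_left_center
```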
  6. The method according to claim 5, wherein responding to the relay tracking request signal comprises:
    determining that the positioning accuracy of the target is lower than a predetermined threshold; and
    responding to the relay tracking request signal.
  7. The method according to claim 1, wherein after virtual positioning is performed on the target according to the virtual positioning data, the method further comprises:
    in response to an identification matching task, determining at least one camera in the second camera cluster whose view contains the target;
    acquiring frame image information of the at least one camera; and
    recognizing a feature identifier in the frame image information, and generating target identification information of the target based on the feature identifier.
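The identification-matching loop of claim 7 can be sketched as follows. The `read_frame` and `extract_feature_id` callables are placeholders for real frame capture and feature recognition (for example, license-plate or badge recognition); the `"target-"` prefix for the generated identification information is likewise an assumption.

```python
def match_identity(cameras_with_target, read_frame, extract_feature_id):
    """Scan the cameras whose views contain the target, read each
    camera's current frame, and return target identification
    information built from the first feature identifier found."""
    for cam in cameras_with_target:
        frame = read_frame(cam)
        feature_id = extract_feature_id(frame)
        if feature_id is not None:
            return f"target-{feature_id}"
    return None  # no camera yielded a recognizable feature identifier

frames = {"cam-1": "blurry", "cam-2": "plate:B123"}
target_id = match_identity(
    ["cam-1", "cam-2"],
    frames.get,
    lambda f: f.split(":", 1)[1] if ":" in f else None,
)
```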
  8. The method according to claim 7, wherein performing virtual positioning on the target according to the virtual positioning data comprises:
    responding to a target query instruction from a terminal device, and acquiring target identification information of the target query instruction;
    determining a viewfinder camera based on the target identification information in combination with the target positioning data and/or the virtual positioning data; and
    feeding back a video stream of the target collected by the viewfinder camera to the terminal device.
  9. The method according to any one of claims 1 to 8, wherein before virtual positioning is performed on the target according to the virtual positioning data, the method further comprises:
    in response to a tracking task request, determining a positioning calibration camera in the second camera cluster according to the virtual positioning data;
    acquiring frame image buffer information of the positioning calibration camera;
    generating calibration positioning data of the target in the positioning calibration camera based on the frame image buffer information; and
    correcting the target positioning data according to the calibration positioning data.
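One simple way to realize the correction step of claim 9 is a weighted blend of the tracked position with the calibration camera's measurement. The claim does not prescribe a correction formula, so the coordinate-wise blend and the `weight` favoring the calibration data are assumptions for illustration.

```python
def correct_position(target_pos, calibration_pos, weight=0.7):
    """Correct tracked target positioning data with calibration data.

    `target_pos` and `calibration_pos` are coordinate tuples; `weight`
    is a hypothetical blend factor giving more trust to the positioning
    calibration camera's measurement.
    """
    return tuple(weight * c + (1 - weight) * t
                 for t, c in zip(target_pos, calibration_pos))

corrected = correct_position((0.0, 0.0), (10.0, 10.0))
```

A production system would more plausibly fold the calibration measurement into a Kalman-style filter; the fixed blend above only shows where the calibration data enters the pipeline.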
  10. The method according to claim 9, wherein a starting condition of the tracking task request comprises one of the following: the target is lost, the determination accuracy of the target is lower than a predetermined threshold, or a predetermined schedule of the target tracking task.
  11. The method according to claim 9, further comprising:
    determining a viewfinder camera according to a predetermined rule based on the target positioning data and/or the virtual positioning data; and
    sending frame image information collected by the viewfinder camera to a predetermined storage medium.
  12. A virtual positioning apparatus, comprising:
    an acquisition unit, configured to acquire target positioning data of a target in a first camera cluster, wherein the first camera cluster comprises the cameras used for sampling by a positioning system, and the positioning system is configured to perform a target tracking task to generate positioning data;
    a sending unit, configured to send the target positioning data to the positioning system, wherein the positioning system generates virtual positioning data of the target in a second camera cluster by using a target mapping model and the target positioning data, and the target mapping model describes a spatial mapping relationship between the first camera cluster and the second camera cluster; and
    a virtual positioning unit, configured to perform virtual positioning on the target according to the virtual positioning data.
  13. The apparatus according to claim 12, wherein the target mapping model is a mapping model obtained by the positioning system from a mapping model system, and wherein the mapping model system generates, by means of initialization modeling, a mapping model describing the position and viewing-angle relationships of the cameras within a predetermined application scenario.
  14. The apparatus according to claim 12, further comprising: a verification unit, configured to verify the virtual positioning data before virtual positioning is performed on the target according to the virtual positioning data;
    wherein the verification unit comprises:
    a first acquisition module, configured to acquire actual positioning data of the target in the second camera cluster; and
    a verification module, configured to determine a similarity between the actual positioning data and the virtual positioning data by using a predetermined verification rule, so as to verify the consistency between the virtual positioning data and the actual positioning data.
  15. The apparatus according to claim 12, further comprising:
    a first determining unit, configured to respond to a relay tracking request signal and determine the best sampling camera in the second camera cluster according to the virtual positioning data; and
    a setting unit, configured to set the attribute of the best sampling camera to the first camera cluster to obtain an updated first camera cluster;
    wherein the acquisition unit is configured to acquire an image information stream collected by the updated first camera cluster; and
    the sending unit is configured to send the image information stream to the positioning system, wherein the positioning system generates target positioning data of the target in the updated first camera cluster based on the image information stream.
  16. The apparatus according to claim 15, wherein the first determining unit comprises:
    a first determining module, configured to determine a distribution density of cameras in a predetermined application scenario where the first camera cluster and the second camera cluster are located;
    a second determining module, configured to respond to the relay tracking request signal when it is determined that the distribution density is less than a predetermined value and the target is located at a predetermined position at the edge of the viewing area of the first camera cluster; or
    a third determining module, configured to respond to the relay tracking request signal when it is determined that the distribution density is not less than the predetermined value and the target leaves the middle position of the viewing area of the first camera cluster.
  17. The apparatus according to claim 16, wherein the first determining unit comprises:
    a fourth determining module, configured to determine that the positioning accuracy of the target is lower than a predetermined threshold; and
    a response module, configured to respond to the relay tracking request signal.
  18. The apparatus according to claim 12, further comprising:
    a second determining unit, configured to, after virtual positioning is performed on the target according to the virtual positioning data, determine, in response to an identification matching task, at least one camera in the second camera cluster whose view contains the target;
    wherein the acquisition unit is configured to acquire frame image information of the at least one camera; and
    a first generating unit, configured to recognize a feature identifier in the frame image information and generate target identification information of the target based on the feature identifier.
  19. The apparatus according to claim 18, wherein the virtual positioning unit comprises:
    a second acquisition module, configured to respond to a target query instruction from a terminal device and acquire target identification information of the target query instruction;
    a fifth determining module, configured to determine a viewfinder camera based on the target identification information in combination with the target positioning data and/or the virtual positioning data; and
    a feedback module, configured to feed back a video stream of the target collected by the viewfinder camera to the terminal device.
  20. The apparatus according to any one of claims 12 to 19, further comprising:
    a third determining unit, configured to, before virtual positioning is performed on the target according to the virtual positioning data, determine, in response to a tracking task request, a positioning calibration camera in the second camera cluster according to the virtual positioning data;
    wherein the acquisition unit is configured to acquire frame image buffer information of the positioning calibration camera;
    a second generating unit, configured to generate calibration positioning data of the target in the positioning calibration camera based on the frame image buffer information; and
    a correction unit, configured to correct the target positioning data according to the calibration positioning data.
  21. The apparatus according to claim 20, wherein a starting condition of the tracking task request comprises one of the following: the target is lost, the determination accuracy of the target is lower than a predetermined threshold, or a predetermined schedule of the target tracking task.
  22. The apparatus according to claim 20, further comprising:
    a fourth determining unit, configured to determine a viewfinder camera according to a predetermined rule based on the target positioning data and/or the virtual positioning data;
    wherein the sending unit is configured to send frame image information collected by the viewfinder camera to a predetermined storage medium.
  23. A virtual positioning system, using the virtual positioning method according to any one of claims 1 to 11.
  24. A computer-readable storage medium, comprising a stored computer program, wherein when the computer program is run by a processor, a device where the computer-readable storage medium is located is controlled to execute the virtual positioning method according to any one of claims 1 to 11.
  25. A processor, configured to run a computer program, wherein the virtual positioning method according to any one of claims 1 to 11 is executed when the computer program runs.
PCT/CN2021/138525 2020-12-21 2021-12-15 Virtual positioning method and apparatus, and virtual positioning system WO2022135242A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011522213.2 2020-12-21
CN202011522213.2A CN114648572A (en) 2020-12-21 2020-12-21 Virtual positioning method and device and virtual positioning system

Publications (1)

Publication Number Publication Date
WO2022135242A1

Family

ID=81992014

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/138525 WO2022135242A1 (en) 2020-12-21 2021-12-15 Virtual positioning method and apparatus, and virtual positioning system

Country Status (2)

Country Link
CN (1) CN114648572A (en)
WO (1) WO2022135242A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115955550A (en) * 2023-03-15 2023-04-11 浙江宇视科技有限公司 Image analysis method and system of GPU (graphics processing Unit) cluster

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107632703A (en) * 2017-09-01 2018-01-26 广州励丰文化科技股份有限公司 Mixed reality audio control method and service equipment based on binocular camera
CN109993834A (en) * 2017-12-30 2019-07-09 深圳多哚新技术有限责任公司 Localization method and device of the target object in Virtual Space
CN209147987U (en) * 2019-01-11 2019-07-23 广州艾目易科技有限公司 A kind of binocular vision optical orientator
CN110213488A (en) * 2019-06-06 2019-09-06 腾讯科技(深圳)有限公司 A kind of localization method and relevant device
WO2020234559A1 (en) * 2019-05-22 2020-11-26 Sony Interactive Entertainment Inc. Data processing


Also Published As

Publication number Publication date
CN114648572A (en) 2022-06-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21909238

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the EP bulletin as the address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21.11.2023)