CN113051968A - Violent sorting behavior identification method and device and computer readable storage medium - Google Patents


Info

Publication number
CN113051968A
Authority
CN
China
Prior art keywords
coordinate system
parabolic
pose
camera
paraboloid
Prior art date
Legal status
Granted
Application number
CN201911369379.2A
Other languages
Chinese (zh)
Other versions
CN113051968B (en)
Inventor
杨小平
Current Assignee
SF Technology Co Ltd
Original Assignee
SF Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SF Technology Co Ltd
Priority to CN201911369379.2A
Publication of CN113051968A
Application granted
Publication of CN113051968B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras

Abstract

The embodiment of the application discloses a violent sorting behavior identification method, a violent sorting behavior identification device and a computer readable storage medium, wherein the method comprises the following steps: acquiring a parabolic sorting video; acquiring the pose of the entity camera under a preset coordinate system; acquiring position information of a parabolic track under an entity camera coordinate system and a pose of a paraboloid under a preset coordinate system based on a plurality of images of the parabolic sorting video; calculating the pose of the virtual camera in the preset coordinate system based on the pose of the paraboloid in the preset coordinate system; correcting the position information of the parabolic track in the coordinate system of the entity camera based on the pose of the entity camera in the preset coordinate system, the pose of the virtual camera in the preset coordinate system and the pose of the paraboloid in the preset coordinate system to obtain the position information of the parabolic track in the coordinate system of the virtual camera; and identifying violent sorting behavior based on the position information of the parabolic track in the virtual camera coordinate system. The present application can improve the accuracy of violent sorting behavior identification.

Description

Violent sorting behavior identification method and device and computer readable storage medium
Technical Field
The application relates to the technical field of computer vision, in particular to a violent sorting behavior identification method and device and a computer readable storage medium.
Background
In the logistics industry, violent sorting behaviors not only harm a company's image but also cause it significant economic loss. Existing identification basically relies on manual inspection of surveillance videos and similar means, which is highly subjective, requires a great deal of labor cost, and allows only spot checks. Moreover, because the camera may not face the parabolic trajectory head-on, the captured parabolic trajectory is shot obliquely. An obliquely shot parabolic trajectory differs from the real parabolic trajectory, so identifying violent sorting behavior from the obliquely shot trajectory is very inaccurate.
That is, in the prior art, accuracy is low when violent sorting behavior is identified from a distorted parabolic trajectory.
Disclosure of Invention
The embodiment of the application provides a violent sorting behavior identification method and device and a computer readable storage medium, aiming at solving the problem of how to correct a distorted parabolic track to identify violent sorting behavior, so that the accuracy of violent sorting behavior identification is improved.
In a first aspect, the present application provides a violent sorting behavior identification method, including:
acquiring a parabolic sorting video;
acquiring the pose of the entity camera under a preset coordinate system;
acquiring position information of a parabolic track under an entity camera coordinate system and a pose of a paraboloid under a preset coordinate system based on a plurality of images of the parabolic sorting video, wherein the paraboloid is a plane where the parabolic track is located;
calculating the pose of a virtual camera in a preset coordinate system based on the pose of the paraboloid in the preset coordinate system, and determining that the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the entity camera and the paraboloid;
correcting the position information of the parabolic track in the coordinate system of the physical camera based on the pose of the physical camera in the preset coordinate system, the pose of the virtual camera in the preset coordinate system and the pose of the paraboloid in the preset coordinate system to obtain the position information of the parabolic track in the coordinate system of the virtual camera;
identifying violent sorting behavior based on the position information of the parabolic track in the virtual camera coordinate system.
Wherein the obtaining a parabolic sorting video comprises:
acquiring a sorting video;
segmenting the sorting video to obtain a plurality of sorting sub-videos;
performing image detection on images in the sorted sub-videos to extract a parabolic sorted video from the plurality of sorted sub-videos.
Wherein the acquiring position information of the parabolic track in the entity camera coordinate system and the pose of the paraboloid in the preset coordinate system based on the plurality of images of the parabolic sorting video comprises:
acquiring a throwing area;
intercepting a plurality of images in the parabolic sorting video based on the throwing area to obtain a plurality of intercepted images;
performing image detection on the plurality of intercepted images to acquire position information of the parabolic track under a coordinate system of an entity camera;
and acquiring the pose of the paraboloid under a preset coordinate system.
The method for acquiring the pose of the paraboloid under the preset coordinate system comprises the following steps:
acquiring a preset image from the plurality of intercepted images;
drawing the parabolic track on the preset image based on the position information of the parabolic track in an entity camera coordinate system to obtain the parabolic image, wherein the parabolic image comprises the parabolic track and a shooting background image;
and performing regression analysis on the parabolic image to obtain the pose of the paraboloid under a preset coordinate system.
Wherein the acquiring a throwing area comprises:
carrying out image fusion on a plurality of images in the parabolic sorting video to obtain a fused image;
and carrying out image detection on the fused image to acquire the throwing area.
Wherein the correcting the position information of the parabolic track in the coordinate system of the physical camera based on the pose of the physical camera in the preset coordinate system, the pose of the virtual camera in the preset coordinate system and the pose of the paraboloid in the preset coordinate system to obtain the position information of the parabolic track in the coordinate system of the virtual camera comprises:
calculating a first pose transformation relation between the coordinate system of the physical camera and the preset coordinate system based on the pose of the physical camera in the preset coordinate system;
calculating a second position and posture conversion relation between the paraboloid coordinate system of the paraboloid and the preset coordinate system based on the position and posture of the paraboloid under the preset coordinate system;
calculating a third pose transformation relation between the virtual camera coordinate system and the preset coordinate system based on the pose of the virtual camera in the preset coordinate system and the pose of the entity camera in the preset coordinate system;
determining a fourth pose transformation relationship between the physical camera coordinate system and the virtual camera coordinate system based on the first pose transformation relationship, the second pose transformation relationship and the third pose transformation relationship;
and correcting the position information of the parabolic track in the physical camera coordinate system based on the fourth pose conversion relation to obtain the position information of the parabolic track in the virtual camera coordinate system.
Wherein the identifying violent sorting behavior based on the position information of the parabolic track in the virtual camera coordinate system comprises:
acquiring a starting point coordinate and an end point coordinate of the parabolic track in the virtual camera coordinate system based on the position information of the parabolic track in the virtual camera coordinate system;
determining a parabolic distance based on a start point coordinate and an end point coordinate of the parabolic track in the virtual camera coordinate system;
violent sorting behavior is identified based on the parabolic distance.
In a second aspect, the present application provides a violent sorting behavior recognition apparatus, comprising:
the first acquisition unit is used for acquiring a parabolic sorting video;
the second acquisition unit is used for acquiring the pose of the entity camera under a preset coordinate system;
a third obtaining unit, configured to obtain, based on a plurality of images of the parabolic sorting video, position information of a parabolic track in an entity camera coordinate system and a pose of a parabolic surface in a preset coordinate system, where the parabolic surface is a plane where the parabolic track is located;
the pose calculation unit is used for calculating the pose of the virtual camera in a preset coordinate system based on the pose of the paraboloid in the preset coordinate system and determining that the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the entity camera and the paraboloid;
the correction unit is used for correcting the position information of the parabolic track in the coordinate system of the physical camera based on the pose of the physical camera in the preset coordinate system, the pose of the virtual camera in the preset coordinate system and the pose of the paraboloid in the preset coordinate system to obtain the position information of the parabolic track in the coordinate system of the virtual camera;
an identification unit for identifying violent sorting behavior based on the position information of the parabolic track in the virtual camera coordinate system.
The first obtaining unit is further used for obtaining a sorting video;
segmenting the sorting video to obtain a plurality of sorting sub-videos;
performing image detection on images in the sorted sub-videos to extract a parabolic sorted video from the plurality of sorted sub-videos.
The third acquisition unit is also used for acquiring a throwing area;
intercepting a plurality of images in the parabolic sorting video based on the throwing area to obtain a plurality of intercepted images;
performing image detection on the plurality of intercepted images to acquire position information of the parabolic track under a coordinate system of an entity camera;
and acquiring the pose of the paraboloid under a preset coordinate system.
The third obtaining unit is further configured to obtain a preset image from the plurality of captured images;
drawing the parabolic track on the preset image based on the position information of the parabolic track in an entity camera coordinate system to obtain the parabolic image, wherein the parabolic image comprises the parabolic track and a shooting background image;
and performing regression analysis on the parabolic image to obtain the pose of the paraboloid under a preset coordinate system.
The third obtaining unit is further configured to perform image fusion on the plurality of images in the parabolic sorting video to obtain a fused image;
and carrying out image detection on the fused image to acquire the throwing area.
The correction unit is further used for calculating a first pose transformation relation between the entity camera coordinate system and the preset coordinate system based on the pose of the entity camera in the preset coordinate system;
calculating a second position and posture conversion relation between the paraboloid coordinate system of the paraboloid and the preset coordinate system based on the position and posture of the paraboloid under the preset coordinate system;
calculating a third pose transformation relation between the virtual camera coordinate system and the preset coordinate system based on the pose of the virtual camera in the preset coordinate system and the pose of the entity camera in the preset coordinate system;
determining a fourth pose transformation relationship between the physical camera coordinate system and the virtual camera coordinate system based on the first pose transformation relationship, the second pose transformation relationship and the third pose transformation relationship;
and correcting the position information of the parabolic track in the physical camera coordinate system based on the fourth pose conversion relation to obtain the position information of the parabolic track in the virtual camera coordinate system.
The identification unit is further configured to acquire a start point coordinate and an end point coordinate of the parabolic track in the virtual camera coordinate system based on the position information of the parabolic track in the virtual camera coordinate system;
determining a parabolic distance based on a start point coordinate and an end point coordinate of the parabolic track in the virtual camera coordinate system;
violent sorting behavior is identified based on the parabolic distance.
In a third aspect, the present application provides an electronic device, comprising:
one or more processors;
a memory; and
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the processor to implement the violent sorting behavior identification method of any of the first aspects.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which is loaded by a processor to perform the steps of the method for identifying violent sorting behavior of any one of the first aspect.
The violent sorting behavior identification method in the application calculates the pose of the virtual camera in the preset coordinate system based on the pose of the paraboloid in the preset coordinate system, corrects the position information of the parabolic track in the entity camera coordinate system based on the pose of the entity camera in the preset coordinate system, the pose of the virtual camera in the preset coordinate system and the pose of the paraboloid in the preset coordinate system to obtain the position information of the parabolic track in the virtual camera coordinate system, and identifies violent sorting behavior according to the position information of the parabolic track in the virtual camera coordinate system. Because the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the entity camera and the paraboloid, the parabolic track shot by the virtual camera is closer to the real parabolic track than the parabolic track shot by the entity camera. After the position information of the parabolic track in the coordinate system of the entity camera is corrected into the position information of the parabolic track in the coordinate system of the virtual camera, the obtained parabolic track is therefore closer to the real parabolic track, which further improves the accuracy of violent sorting behavior identification.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic view of a scenario of an express mail sorting system provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of an embodiment of a violent sorting behavior identification method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a scenario of the violent sorting behavior recognition method of FIG. 2;
fig. 4 is a schematic flow chart of S21 in the violent sorting behavior recognition method of fig. 2;
fig. 5 is a schematic flow chart of S23 in the violent sorting behavior recognition method of fig. 2;
fig. 6 is a schematic flow chart of S25 in the violent sorting behavior recognition method of fig. 2;
fig. 7 is a schematic structural diagram of an embodiment of a violent sorting behavior recognition apparatus provided in an embodiment of the present application;
fig. 8 is a schematic structural diagram of an embodiment of an electronic device provided in the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the present application, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience and simplicity of description; they do not indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation, and thus should not be construed as limiting the present application. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more such features. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
In this application, the word "exemplary" is used to mean "serving as an example, instance, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the application. In the following description, details are set forth for the purpose of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known structures and processes are not set forth in detail in order to avoid obscuring the description of the present application with unnecessary detail. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
The embodiment of the application provides a violent sorting behavior identification method and device, electronic equipment and a storage medium. The following are detailed below.
Referring to fig. 1, fig. 1 is a schematic view of a scenario of an express sorting system according to an embodiment of the present application, where the express sorting system may include an electronic device 100, and a violent sorting behavior recognition apparatus, such as the electronic device in fig. 1, is integrated in the electronic device 100.
In the embodiment of the present application, the electronic device 100 is mainly used for acquiring a parabolic sorting video; acquiring the pose of the entity camera under a preset coordinate system; acquiring position information of a parabolic track under an entity camera coordinate system and a pose of a paraboloid under a preset coordinate system based on a plurality of images of a parabolic sorting video, wherein the paraboloid is a plane where the parabolic track is located; calculating the pose of the virtual camera in the preset coordinate system based on the pose of the paraboloid in the preset coordinate system, wherein the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the entity camera and the paraboloid; correcting the position information of the parabolic track in the coordinate system of the entity camera based on the pose of the entity camera in the preset coordinate system, the pose of the virtual camera in the preset coordinate system and the pose of the paraboloid in the preset coordinate system to obtain the position information of the parabolic track in the coordinate system of the virtual camera; violent sorting behavior is identified based on position information of the parabolic trajectory in a virtual camera coordinate system.
In this embodiment of the application, the electronic device 100 may be an independent server, or may be a server network or a server cluster composed of multiple servers; for example, the electronic device 100 described in this embodiment of the application includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud server composed of multiple servers. Among them, a cloud server is constituted by a large number of computers or network servers based on cloud computing.
Those skilled in the art will understand that the application environment shown in fig. 1 is only one application scenario related to the present application, and does not constitute a limitation on the application scenario of the present application, and that other application environments may further include more or less electronic devices than those shown in fig. 1, for example, only 1 electronic device is shown in fig. 1, and it is understood that the express sorting system may further include one or more other services, which are not limited herein.
In addition, as shown in fig. 1, the express sorting system may further include a physical camera 200 for acquiring video data. The entity camera 200 may be installed at a preset position of the sorting site so as to photograph the sorting site to obtain video data.
In other embodiments, the electronic device 100 may be directly the physical camera 200, the physical camera 200 having the computing functions of the electronic device and the functions of acquiring video.
It should be noted that the scenario diagram of the express mail sorting system shown in fig. 1 is merely an example, and the express mail sorting system and the scenario described in the embodiment of the present application are for more clearly illustrating the technical solution of the embodiment of the present application, and do not form a limitation on the technical solution provided in the embodiment of the present application.
First, an embodiment of the present application provides a violent sorting behavior identification method, where the violent sorting behavior identification method includes:
acquiring a parabolic sorting video; acquiring the pose of the entity camera under a preset coordinate system; acquiring position information of a parabolic track under an entity camera coordinate system and a pose of a paraboloid under a preset coordinate system based on a plurality of images of a parabolic sorting video, wherein the paraboloid is a plane where the parabolic track is located; calculating the pose of the virtual camera in the preset coordinate system based on the pose of the paraboloid in the preset coordinate system, wherein the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the entity camera and the paraboloid; correcting the position information of the parabolic track in the coordinate system of the entity camera based on the pose of the entity camera in the preset coordinate system, the pose of the virtual camera in the preset coordinate system and the pose of the paraboloid in the preset coordinate system to obtain the position information of the parabolic track in the coordinate system of the virtual camera; violent sorting behavior is identified based on position information of the parabolic trajectory in a virtual camera coordinate system.
Referring to fig. 2 and fig. 3, fig. 2 is a schematic flow chart illustrating an embodiment of a violent sorting behavior identification method in the embodiment of the present application; fig. 3 is a scene schematic diagram of the violent sorting behavior recognition method in fig. 2.
With reference to fig. 2 and 3, the violent sorting behavior identification method includes:
and S21, acquiring the parabolic sorting video.
In the embodiment of the present application, referring to fig. 4 in particular, fig. 4 is a schematic flow chart of S21 in the violent sorting behavior identification method of fig. 2.
As shown in fig. 4, S21 includes S211, S212, and S213. The method comprises the following specific steps:
S211, acquiring a sorting video.
The physical camera 200 is installed at a preset position of the sorting site to capture the sorting site and obtain a sorting video. The sorting video shot by the physical camera 200 may be sent directly to the electronic device, or may be stored in a memory from which the electronic device obtains it by reading the data. Sending the sorting video shot by the physical camera 200 directly to the electronic device for processing enables real-time identification of violent sorting behavior.
S212, segmenting the sorting videos to obtain a plurality of sorting sub-videos.
Specifically, the sorting video is segmented according to a preset time length, which may be, for example, 2 s or 1 s. By segmenting the sorting video in this way, a plurality of sorting sub-videos are obtained.
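As a minimal sketch (assuming the sorting video has already been decoded into a list of frames at a known frame rate; the two-second default mirrors the example above):

```python
def segment_video(frames: list, fps: float, segment_seconds: float = 2.0) -> list:
    """Split a decoded sorting video (a list of frames) into fixed-length sorting sub-videos."""
    frames_per_segment = max(int(round(fps * segment_seconds)), 1)
    return [frames[start:start + frames_per_segment]
            for start in range(0, len(frames), frames_per_segment)]
```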
And S213, carrying out image detection on the images in the sorting sub-videos so as to extract the parabolic sorting video from the plurality of sorting sub-videos.
In a specific embodiment, image detection is performed on two images of a sorting sub-video to determine whether the position of the express item changes between the two images; if so, the sorting sub-video is determined to be a parabolic sorting video and is extracted as such.
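A minimal sketch of this screening step is given below. The detect_parcel_center callable is a hypothetical stand-in for the image-detection model (not part of this disclosure), and the pixel threshold is an illustrative value.

```python
from typing import Callable, Optional

import cv2
import numpy as np

def is_parabolic_sub_video(video_path: str,
                           detect_parcel_center: Callable[[np.ndarray], Optional[np.ndarray]],
                           pixel_shift_threshold: float = 20.0) -> bool:
    """Flag a sorting sub-video as a parabolic sorting video when the detected express-item
    position shifts by more than a threshold between the first and last frame."""
    capture = cv2.VideoCapture(video_path)
    frame_count = int(capture.get(cv2.CAP_PROP_FRAME_COUNT))
    centers = []
    for index in (0, max(frame_count - 1, 1)):        # sample two frames: first and last
        capture.set(cv2.CAP_PROP_POS_FRAMES, index)
        ok, frame = capture.read()
        if ok:
            center = detect_parcel_center(frame)      # caller-supplied detector
            if center is not None:
                centers.append(np.asarray(center, dtype=float))
    capture.release()
    if len(centers) < 2:
        return False
    # A large displacement between the two detections suggests the item was thrown.
    return float(np.linalg.norm(centers[1] - centers[0])) > pixel_shift_threshold
```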
And S22, acquiring the pose of the entity camera in a preset coordinate system.
In the embodiment of the application, the pose comprises position information and attitude information. The position information is a three-dimensional coordinate of the object in a preset coordinate system, and the attitude information comprises a pitch angle, a roll angle and a yaw angle of the object in the preset coordinate system. The preset coordinate system may be a world coordinate system. A preset coordinate system is established, and the pose of the physical camera 200 in the preset coordinate system is acquired.
In the embodiment of the present application, regression analysis is performed on a plurality of images of the parabolic sorting video to obtain the pose of the entity camera 200 in the preset coordinate system. Of course, in other embodiments, the pre-stored pose of the physical camera 200 in the preset coordinate system may also be read, which is not limited in this application.
Specifically, the preset coordinate system is established by taking the point at a distance h directly below the physical camera as the origin O1 of the preset coordinate system, the vertical direction as the Y axis of the preset coordinate system, and the horizontal plane as the XOZ plane of the preset coordinate system. The origin of the physical camera coordinate system is O2. The pose of the physical camera 200 in the preset coordinate system can then be acquired. For example, the coordinates of the physical camera 200 in the preset coordinate system are (0, h, 0), and its posture in the preset coordinate system is a pitch angle θ1. Of course, the preset coordinate system may also be established according to specific requirements, which is not limited in this application.
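For illustration only, the pose convention above can be written down as a small data structure; the height and angle values below are assumed example numbers, not values taken from this disclosure.

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class Pose:
    """Pose in the preset (world) coordinate system: a position plus Euler angles in radians."""
    position: np.ndarray  # (x, y, z)
    pitch: float = 0.0
    roll: float = 0.0
    yaw: float = 0.0

# Illustrative values only: camera mounted at height h above the origin O1,
# looking down at the sorting site with pitch angle theta_1.
h = 3.0
theta_1 = np.deg2rad(35.0)
physical_camera_pose = Pose(position=np.array([0.0, h, 0.0]), pitch=theta_1)
```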
S23, acquiring position information of a parabolic track in an entity camera coordinate system and a pose of a paraboloid in a preset coordinate system based on a plurality of images of the parabolic sorting video, wherein the paraboloid is a plane where the parabolic track is located.
In the embodiment of the application, a parabolic image is obtained based on a plurality of images in a parabolic sorting video, wherein the parabolic image comprises a parabolic track and a shooting background image; and performing regression analysis on the parabolic image to obtain the pose of the paraboloid 28 in a preset coordinate system.
Referring to fig. 5, fig. 5 is a schematic flow chart of S23 in the violent sorting behavior recognition method of fig. 2.
As shown in fig. 5, in the embodiment of the present application, S23 includes S231, S232, S233, and S234. The method comprises the following specific steps:
and S231, acquiring a throwing area.
In the embodiment of the application, a plurality of images in a parabolic sorting video are subjected to image fusion to obtain a fused image; and carrying out image detection on the fused image to acquire the throwing area.
In a specific embodiment, all frame images in the parabolic sorting video are extracted, and the pixel values of all the extracted frame images are overlapped to obtain a fused image. In another specific embodiment, partial frame images in the parabolic sorted video are extracted, for example, only odd frame images are extracted, and pixel value superposition is performed to obtain a fused image, so that the calculation efficiency can be improved. In other embodiments, other manners may be used to perform image fusion on multiple images in the parabolic sorting video, which is not limited in this application.
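A minimal sketch of this pixel-level fusion, assuming the frames have already been decoded into arrays of identical shape; averaging rather than raw summation is an implementation choice made here to keep pixel values in range.

```python
import numpy as np

def fuse_frames(frames: list, step: int = 1) -> np.ndarray:
    """Fuse frames of a parabolic sorting video into a single image.

    step=1 superimposes all frames; step=2 keeps every other frame (the odd frames in
    1-based counting), trading a little detail for speed as described above.
    """
    sampled = frames[::step]
    stacked = np.stack([np.asarray(frame, dtype=np.float32) for frame in sampled], axis=0)
    fused = stacked.mean(axis=0)   # averaging keeps the result in the original value range
    return fused.astype(np.uint8)
```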
In a specific embodiment, the fused image is image-inspected through the YOLO network to obtain the throwing area. The YOLO network goes from the input original image to the output object positions and classes in a single end-to-end network, so the object area can be determined directly from the input image. The YOLO network is a convolutional neural network that predicts the positions and classes of multiple objects in one pass, achieves end-to-end target detection and recognition, and its greatest advantage is its speed. Of course, in other embodiments, the target detection model may be Fast R-CNN or the like, which is not limited in this application.
S232, intercepting a plurality of images in the parabolic sorting video based on the throwing area to obtain a plurality of intercepted images.
In the embodiment of the application, the non-fused multi-frame images in the parabolic sorting video are intercepted based on the throwing area to obtain a plurality of intercepted images. Therefore, each captured image has a shooting background image and a throwing object corresponding to the shooting time.
And S233, carrying out image detection on the plurality of intercepted images to acquire position information of the parabolic track in the coordinate system of the physical camera.
In the embodiment of the application, the plurality of captured images are subjected to image detection through the deep neural network so as to obtain the position information of the parabolic track under the coordinate system of the entity camera. Each captured image is provided with a shooting background image and a throwing object corresponding to the shooting time, so that the position information of the throwing object at different times can be obtained, and the position information of the parabolic track under the coordinate system of the physical camera can be obtained.
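One way such per-frame detections could be assembled into the track used by the later steps is sketched below; detect_parcel_position is a hypothetical stand-in for the deep neural network mentioned above and is supplied by the caller.

```python
from typing import Callable, Optional

import numpy as np

def extract_trajectory(cropped_images: list,
                       detect_parcel_position: Callable[[np.ndarray], Optional[np.ndarray]]) -> np.ndarray:
    """Collect the detected parcel positions of the intercepted images in time order.

    Returns an (N, 3) array: the parabolic track in the physical camera coordinate system.
    """
    points = []
    for image in cropped_images:
        position = detect_parcel_position(image)   # caller-supplied detector
        if position is not None:
            points.append(np.asarray(position, dtype=float))
    if not points:
        return np.empty((0, 3))
    return np.stack(points, axis=0)
```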
And S234, acquiring the pose of the paraboloid under a preset coordinate system.
In the embodiment of the application, a parabolic track is drawn on a preset image of a plurality of intercepted images based on position information of the parabolic track in an entity camera coordinate system to obtain a parabolic image, wherein the parabolic image comprises the parabolic track and a shooting background image.
In a specific embodiment, the preset image is the first frame image of the plurality of intercepted images. The parabolic track is drawn on the preset image based on the coordinates of the parabolic track in the physical camera coordinate system to obtain a parabolic image. Because the preset image is an original image of the sorting video, the obtained parabolic image comprises the parabolic track and a shooting background image, and regression analysis is performed on the parabolic image using a deep neural network model to obtain the pose of the paraboloid 28 in the preset coordinate system. For example, the pose of the paraboloid 28 is: the coordinates of the intersection point of the paraboloid 28 and the Z axis of the preset coordinate system are (0, 0, dn), and the included angle between the paraboloid 28 and the XOY plane of the preset coordinate system is θ2.
And S24, calculating the pose of the virtual camera in the preset coordinate system based on the pose of the paraboloid in the preset coordinate system, wherein the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the physical camera and the paraboloid.
In the embodiment of the application, the included angle between the optical axis of the physical camera and the paraboloid is calculated based on the pose of the physical camera in the preset coordinate system and the pose of the paraboloid in the preset coordinate system, and whether this included angle is smaller than a preset angle value is judged. If so, the deviation between the parabolic track shot by the physical camera and the real track is large, and the position information of the parabolic track in the physical camera coordinate system and the pose of the paraboloid in the preset coordinate system are acquired based on the plurality of images of the parabolic sorting video; if not, the deviation between the parabolic track shot by the physical camera and the real track is small, and violent sorting behavior is identified directly based on the position information of the parabolic track in the physical camera coordinate system. The preset angle value is set according to specific conditions, for example, 30 degrees, 50 degrees, and the like.
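The comparison above can be sketched as follows, assuming the optical-axis direction of the physical camera and the normal of the paraboloid are already available as vectors in the preset coordinate system (how they are derived from the two poses depends on the angle conventions used).

```python
import numpy as np

def axis_plane_angle(optical_axis: np.ndarray, plane_normal: np.ndarray) -> float:
    """Included angle (0 to 90 degrees) between a line direction and a plane with the given normal."""
    axis = optical_axis / np.linalg.norm(optical_axis)
    normal = plane_normal / np.linalg.norm(plane_normal)
    # The angle between a line and a plane is the complement of the line/normal angle.
    return float(np.degrees(np.arcsin(abs(np.dot(axis, normal)))))

def needs_correction(optical_axis: np.ndarray, plane_normal: np.ndarray,
                     preset_angle_deg: float = 50.0) -> bool:
    """True when the physical view is too oblique and the virtual-camera correction is applied."""
    return axis_plane_angle(optical_axis, plane_normal) < preset_angle_deg
```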
In the embodiment of the present application, the pose of the virtual camera in the preset coordinate system is determined based on the pose of the paraboloid 28, wherein the included angle between the optical axis of the virtual camera and the paraboloid 28 is greater than the included angle between the optical axis of the physical camera 200 and the paraboloid 28. Specifically, the included angle between the straight line and the plane is defined to be 0 to 90 degrees.
In a preferred embodiment, the optical axis of the virtual camera is perpendicular to the paraboloid 28, thereby further avoiding distortion of the parabolic trajectory caused by the inclination of the optical axis of the camera with the paraboloid 28.
Further, the physical camera 200 is located at the same position as the virtual camera; that is, the origin of the virtual camera coordinate system is O2. Therefore, when the coordinates of the intersection point of the paraboloid 28 and the Z axis of the preset coordinate system are (0, 0, dn), the included angle between the paraboloid 28 and the XOY plane of the preset coordinate system is θ2, the coordinates of the physical camera 200 in the preset coordinate system are (0, h, 0) and its posture in the preset coordinate system is a pitch angle θ1, the pose of the virtual camera is known to be: roll angle θ2, coordinates (0, h, 0). Of course, in other embodiments, the position of the virtual camera may also be set according to specific situations, which is not limited in this application.
S25, correcting the position information of the parabolic track in the coordinate system of the physical camera based on the pose of the physical camera in the preset coordinate system, the pose of the virtual camera in the preset coordinate system and the pose of the paraboloid in the preset coordinate system to obtain the position information of the parabolic track in the coordinate system of the virtual camera.
To illustrate the specific method of correcting the parabolic trajectory of the present application, the following assumptions are made:
the coordinate of the intersection point of the paraboloid 28 and the Z axis of the preset coordinate system is (0, 0, dn), and the included angle between the paraboloid 28 and the XOY plane of the preset coordinate system is theta2(ii) a The coordinates of the physical camera 200 in the preset coordinate system are (0, h, 0), and the physical camera 200 in the preset coordinatesPitch angle under tie is θ1When the current is over; the virtual camera has coordinates (0, h, 0) in a preset coordinate system, and poses: roll angle theta2. The point P is an arbitrary point on the parabolic trajectory.
In the embodiment of the present application, referring to fig. 6, fig. 6 is a schematic flowchart of S25 in the violent sorting behavior identification method of fig. 2.
As shown in fig. 6, in the embodiment of the present application, S25 includes S251, S252, S253, S254, and S255.
The method comprises the following specific steps:
and S251, calculating a first pose transformation relation between the coordinate system of the physical camera and the preset coordinate system based on the pose of the physical camera in the preset coordinate system.
The first attitude transformation relation comprises a first rotation matrix and a first translation matrix.
In one embodiment, the first rotation matrix between the physical camera coordinate system and the preset coordinate system satisfies the relationship shown in formula (1), where R1 denotes the first rotation matrix.
The first translation matrix between the physical camera coordinate system and the preset coordinate system satisfies the relationship shown in formula (2), where T1 denotes the first translation matrix.
Combining formulas (1) and (2), the first pose transformation relationship between the physical camera coordinate system and the preset coordinate system satisfies the relationship shown in formula (3), which relates the coordinates of the point P in the physical camera coordinate system to the coordinates of the point P in the preset coordinate system through R1 and T1.
And S252, calculating a second pose transformation relation between the paraboloid coordinate system of the paraboloid and the preset coordinate system based on the pose of the paraboloid in the preset coordinate system.
And the second pose transformation relation comprises a second rotation matrix and a second translation matrix.
In one embodiment, the paraboloid coordinate system is established with the intersection point of the paraboloid 28 and the Z axis of the preset coordinate system as the origin.
The second rotation matrix between the paraboloid coordinate system of the paraboloid 28 and the preset coordinate system satisfies the relationship shown in formula (4), where R2 denotes the second rotation matrix.
The second translation matrix between the paraboloid coordinate system of the paraboloid 28 and the preset coordinate system satisfies the relationship shown in formula (5), where T2 denotes the second translation matrix.
The second pose transformation relationship between the paraboloid coordinate system and the preset coordinate system satisfies the relationship shown in formula (6), which relates the coordinates of the point P in the paraboloid coordinate system to the coordinates of the point P in the preset coordinate system through R2 and T2.
And S253, calculating a third pose transformation relation between the coordinate system of the virtual camera and the preset coordinate system based on the pose of the virtual camera in the preset coordinate system and the pose of the entity camera in the preset coordinate system.
And the third posture conversion relation comprises a third rotation matrix and a third translation matrix.
In one embodiment, the third rotation matrix between the virtual camera coordinate system and the preset coordinate system satisfies the relationship shown in formula (7), where R3 denotes the third rotation matrix.
The third translation matrix between the virtual camera coordinate system and the preset coordinate system satisfies the relationship shown in formula (8), where T3 denotes the third translation matrix.
In one embodiment, the third pose transformation relationship between the virtual camera coordinate system and the preset coordinate system satisfies the relationship shown in formula (9), which relates the coordinates of the point P in the virtual camera coordinate system to the coordinates of the point P in the preset coordinate system through R3 and T3.
And S254, determining a fourth pose conversion relation between the entity camera coordinate system and the virtual camera coordinate system based on the first pose conversion relation, the second pose conversion relation and the third pose conversion relation.
Specifically, according to the first pose transformation relation and the second pose transformation relation, the relationship shown in formula (10) can be obtained, which relates the coordinates of the point P in the physical camera coordinate system and in the paraboloid coordinate system through a transformation H1.
According to the second pose transformation relation and the third pose transformation relation, the relationship shown in formula (11) can be obtained, which relates the coordinates of the point P in the paraboloid coordinate system and in the virtual camera coordinate system through a transformation H2.
Combining formulas (10) and (11), the fourth pose transformation relationship between the physical camera coordinate system and the virtual camera coordinate system can be obtained, as shown in formula (12), where H1 is obtained from formula (10) and H2 is obtained from formula (11).
And S255, correcting the position information of the parabolic track in the physical camera coordinate system based on the fourth pose conversion relation to obtain the position information of the parabolic track in the virtual camera coordinate system.
In S254, the fourth pose transformation relationship between the physical camera coordinate system and the virtual camera coordinate system has been obtained. Therefore, the coordinates of the parabolic track in the physical camera coordinate system are input into formula (12) one by one, and the coordinates of the parabolic track in the virtual camera coordinate system are obtained, so that the position information of the parabolic track in the virtual camera coordinate system can be acquired.
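The overall structure of this correction can be sketched with 4x4 homogeneous transforms. Because the matrices of formulas (1) to (12) are not reproduced above, the axis and sign conventions in the sketch (both tilt angles treated as rotations about the X axis, and each frame expressed as a frame-to-world transform relative to the preset coordinate system) are assumptions made purely for illustration, not the exact formulas of this disclosure; note also that the paraboloid frame of formulas (4) to (6) cancels when the chains of formulas (10) and (11) are composed, so it does not appear explicitly below.

```python
import numpy as np

def rot_x(angle: float) -> np.ndarray:
    """Rotation about the X axis; the axis choice is an assumed convention."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def frame_to_world(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """4x4 transform taking homogeneous points of a frame into the preset (world) system."""
    transform = np.eye(4)
    transform[:3, :3] = rotation
    transform[:3, 3] = translation
    return transform

def correct_trajectory(points_cam: np.ndarray, h: float,
                       theta_1: float, theta_2: float) -> np.ndarray:
    """Map parabolic-track points from the physical camera frame into the virtual camera frame.

    points_cam: (N, 3) track points in the physical camera coordinate system.
    h:          height of the camera above the preset-coordinate-system origin.
    theta_1:    pitch angle of the physical camera (radians).
    theta_2:    tilt angle shared by the paraboloid and the virtual camera (radians).
    """
    camera_position = np.array([0.0, h, 0.0])
    # Analogue of formulas (1)-(3): physical camera frame relative to the preset system.
    physical_to_world = frame_to_world(rot_x(theta_1), camera_position)
    # Analogue of formulas (7)-(9): virtual camera at the same position, tilted by theta_2.
    virtual_to_world = frame_to_world(rot_x(theta_2), camera_position)
    # Analogue of formula (12): compose the two chains to go physical camera -> virtual camera.
    physical_to_virtual = np.linalg.inv(virtual_to_world) @ physical_to_world
    homogeneous_points = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])
    corrected = (physical_to_virtual @ homogeneous_points.T).T
    return corrected[:, :3]
```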
And S26, identifying violent sorting behaviors based on the position information of the parabolic track in the virtual camera coordinate system.
In the embodiment of the application, the start point coordinate and the end point coordinate of the parabolic track in the virtual camera coordinate system are obtained based on the position information of the parabolic track in the virtual camera coordinate system; determining a parabolic distance based on a starting point coordinate and an end point coordinate of the parabolic track in a virtual camera coordinate system; violent sorting behavior is identified based on the parabolic distance.
In a specific embodiment, whether the parabolic distance is greater than a preset value is judged, and if yes, the sorting behavior corresponding to the parabolic track is judged to be violent sorting behavior; if not, judging that the sorting behavior corresponding to the parabolic track is not a violent sorting behavior.
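A minimal sketch of this final decision, with the preset distance value shown as an assumed configuration parameter (it would be tuned to the site and to the units of the corrected coordinates):

```python
import numpy as np

def is_violent_sorting(trajectory_virtual: np.ndarray, preset_distance: float = 1.5) -> bool:
    """Flag violent sorting when the start-to-end distance of the corrected track exceeds a preset value.

    trajectory_virtual: (N, 3) parabolic-track points in the virtual camera coordinate system.
    preset_distance:    assumed threshold; the unit follows the corrected coordinates.
    """
    start_point, end_point = trajectory_virtual[0], trajectory_virtual[-1]
    parabolic_distance = float(np.linalg.norm(end_point - start_point))
    return parabolic_distance > preset_distance
```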
According to the violent sorting behavior identification method, the pose of the virtual camera in the preset coordinate system is calculated based on the pose of the paraboloid in the preset coordinate system, then the position information of the parabolic track in the entity camera coordinate system is corrected based on the pose of the entity camera in the preset coordinate system, the pose of the virtual camera in the preset coordinate system and the pose of the paraboloid in the preset coordinate system to obtain the position information of the parabolic track in the virtual camera coordinate system, and finally violent sorting behavior is identified based on the position information of the parabolic track in the virtual camera coordinate system. Because the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the entity camera and the paraboloid, the parabolic track shot by the virtual camera is closer to the real parabolic track than the parabolic track shot by the entity camera. After the position information of the parabolic track in the coordinate system of the entity camera is corrected into the position information of the parabolic track in the coordinate system of the virtual camera, the obtained parabolic track is therefore closer to the real parabolic track, which further improves the accuracy of violent sorting behavior identification.
In order to better implement the violent sorting behavior recognition method in the embodiment of the present application, in addition to the violent sorting behavior recognition method, a violent sorting behavior recognition apparatus is further provided in the embodiment of the present application, as shown in fig. 7, fig. 7 is a schematic structural diagram of an embodiment of the violent sorting behavior recognition apparatus provided in the embodiment of the present application, and the violent sorting behavior recognition apparatus includes a first acquisition unit 401, a second acquisition unit 402, a third acquisition unit 403, a pose calculation unit 404, a correction unit 405, and a recognition unit 406:
a first obtaining unit 401, configured to obtain a parabolic sorting video;
a second obtaining unit 402, configured to obtain a pose of the physical camera in a preset coordinate system;
a third obtaining unit 403, configured to obtain, based on a plurality of images of the parabolic sorting video, position information of a parabolic track in a coordinate system of the physical camera and a pose of a parabolic surface in a preset coordinate system, where the parabolic surface is a plane where the parabolic track is located;
a pose calculation unit 404, configured to calculate a pose of the virtual camera in a preset coordinate system based on a pose of the paraboloid in the preset coordinate system, where an included angle between an optical axis of the virtual camera and the paraboloid is greater than an included angle between an optical axis of the physical camera and the paraboloid;
the correcting unit 405 is configured to correct the position information of the parabolic track in the coordinate system of the physical camera based on the pose of the physical camera in the preset coordinate system, the pose of the virtual camera in the preset coordinate system, and the pose of the paraboloid in the preset coordinate system, so as to obtain the position information of the parabolic track in the coordinate system of the virtual camera;
a recognition unit 406 for recognizing violent sorting behavior based on the position information of the parabolic trajectory in the virtual camera coordinate system.
The first obtaining unit 401 is further configured to obtain a sorted video;
segmenting the sorting video to obtain a plurality of sorting sub-videos;
and carrying out image detection on the images in the sorting sub-videos so as to extract the parabolic sorting videos from the plurality of sorting sub-videos.
The third acquiring unit 403 is further configured to acquire a throwing area;
intercepting a plurality of images in the parabolic sorting video based on the throwing area to obtain a plurality of intercepted images;
carrying out image detection on the plurality of intercepted images to acquire position information of the parabolic track under an entity camera coordinate system;
and acquiring the pose of the paraboloid under a preset coordinate system.
The third obtaining unit 403 is further configured to obtain a preset image from the plurality of captured images;
drawing a parabolic track on a preset image based on the position information of the parabolic track in the entity camera coordinate system to obtain a parabolic image, wherein the parabolic image comprises the parabolic track and a shooting background image;
and performing regression analysis on the parabolic image to obtain the pose of the paraboloid under a preset coordinate system.
The third obtaining unit 403 is further configured to perform image fusion on multiple images in the parabolic sorting video to obtain a fused image;
and carrying out image detection on the fused image to acquire the throwing area.
The correcting unit 405 is further configured to calculate a first pose transformation relationship between the coordinate system of the physical camera and the preset coordinate system based on the pose of the physical camera in the preset coordinate system;
calculating a second position and posture conversion relation between the paraboloid coordinate system of the paraboloid and the preset coordinate system based on the position and posture of the paraboloid in the preset coordinate system;
calculating a third pose transformation relation between the coordinate system of the virtual camera and the preset coordinate system based on the pose of the virtual camera in the preset coordinate system and the pose of the entity camera in the preset coordinate system;
determining a fourth pose conversion relation between the entity camera coordinate system and the virtual camera coordinate system based on the first pose conversion relation, the second pose conversion relation and the third pose conversion relation;
and correcting the position information of the parabolic track in the physical camera coordinate system based on the fourth pose conversion relation to obtain the position information of the parabolic track in the virtual camera coordinate system.
The identification unit 406 is further configured to obtain a start point coordinate and an end point coordinate of the parabolic track in the virtual camera coordinate system based on the position information of the parabolic track in the virtual camera coordinate system;
determining a parabolic distance based on a starting point coordinate and an end point coordinate of the parabolic track in a virtual camera coordinate system;
violent sorting behavior is identified based on the parabolic distance.
The violent sorting behavior recognition device calculates the pose of the virtual camera in the preset coordinate system based on the pose of the paraboloid in the preset coordinate system, corrects the position information of the parabolic track in the entity camera coordinate system based on the pose of the entity camera in the preset coordinate system, the pose of the virtual camera in the preset coordinate system and the pose of the paraboloid in the preset coordinate system to obtain the position information of the parabolic track in the virtual camera coordinate system, and finally recognizes violent sorting behavior based on the position information of the parabolic track in the virtual camera coordinate system. Because the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the entity camera and the paraboloid, the parabolic track shot by the virtual camera is closer to the real parabolic track than the parabolic track shot by the entity camera. After the position information of the parabolic track in the coordinate system of the entity camera is corrected into the position information of the parabolic track in the coordinate system of the virtual camera, the obtained parabolic track is therefore closer to the real parabolic track, which further improves the accuracy of violent sorting behavior identification.
The embodiment of the application also provides the electronic equipment. As shown in fig. 8, fig. 8 is a schematic structural diagram of an embodiment of an electronic device provided in an embodiment of the present application, specifically:
the electronic device may include components such as a processor 501 of one or more processing cores, memory 502 of one or more computer-readable storage media, a power supply 503, and an input unit 504. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 8 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 501 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 502 and calling data stored in the memory 502, thereby performing overall monitoring of the electronic device. Optionally, processor 501 may include one or more processing cores; preferably, the processor 501 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 501.
The memory 502 may be used to store software programs and modules, and the processor 501 executes various functional applications and data processing by running the software programs and modules stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the electronic device, and the like. Further, the memory 502 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 502 may also include a memory controller to provide the processor 501 with access to the memory 502.
The electronic device further comprises a power supply 503 for supplying power to the components. Preferably, the power supply 503 may be logically connected to the processor 501 through a power management system, so that functions such as charging, discharging and power consumption management are implemented through the power management system. The power supply 503 may further include one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other arbitrary components.
The electronic device may also include an input unit 504, where the input unit 504 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the electronic device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 501 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 502 according to the following instructions, and the processor 501 runs the application program stored in the memory 502, so as to implement various functions as follows:
acquiring a parabolic sorting video; acquiring the pose of the entity camera under a preset coordinate system; acquiring position information of a parabolic track under an entity camera coordinate system and a pose of a paraboloid under a preset coordinate system based on a plurality of images of a parabolic sorting video, wherein the paraboloid is a plane where the parabolic track is located; calculating the pose of the virtual camera in a preset coordinate system based on the pose of the paraboloid in the preset coordinate system, and determining that the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the entity camera and the paraboloid; correcting the position information of the parabolic track in the coordinate system of the entity camera based on the pose of the entity camera in the preset coordinate system, the pose of the virtual camera in the preset coordinate system and the pose of the paraboloid in the preset coordinate system to obtain the position information of the parabolic track in the coordinate system of the virtual camera; violent sorting behavior is identified based on position information of the parabolic trajectory in a virtual camera coordinate system.
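For illustration only, one plausible way to place the virtual camera as required above, so that the included angle between its optical axis and the paraboloid is as large as possible, is to back the camera off along the plane normal. The axis convention (plane normal taken as the z-axis of the paraboloid frame), the standoff distance and all names below are assumptions, not details disclosed by the embodiment:

```python
import numpy as np

def virtual_camera_pose_from_paraboloid(R_world_para, t_world_para, standoff=5.0):
    """Place a virtual camera whose optical axis is perpendicular to the paraboloid
    (the plane containing the parabolic track), i.e. the largest possible included angle.

    R_world_para, t_world_para: assumed pose of the paraboloid frame in the preset
    (world) frame, with the plane normal taken as the frame's z-axis.
    standoff: assumed viewing distance in metres.
    """
    normal = R_world_para[:, 2]                      # assumed plane normal in the preset frame
    cam_center = t_world_para + standoff * normal    # back the camera off along the normal
    z_axis = -normal                                 # optical axis points back at the plane

    # Build the remaining camera axes so the rotation matrix is orthonormal.
    up_hint = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(up_hint, z_axis)) > 0.9:           # avoid a degenerate cross product
        up_hint = np.array([0.0, 1.0, 0.0])
    x_axis = np.cross(up_hint, z_axis)
    x_axis /= np.linalg.norm(x_axis)
    y_axis = np.cross(z_axis, x_axis)

    R_world_virt = np.column_stack([x_axis, y_axis, z_axis])
    return R_world_virt, cam_center
```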
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium, which may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, and the like. A computer program is stored thereon and is loaded by a processor to perform the steps of any of the violent sorting behavior identification methods provided in the embodiments of the present application. For example, the computer program may be loaded by a processor to perform the following steps:
acquiring a parabolic sorting video; acquiring the pose of the entity camera under a preset coordinate system; acquiring position information of a parabolic track under an entity camera coordinate system and a pose of a paraboloid under a preset coordinate system based on a plurality of images of a parabolic sorting video, wherein the paraboloid is a plane where the parabolic track is located; calculating the pose of the virtual camera in a preset coordinate system based on the pose of the paraboloid in the preset coordinate system, and determining that the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the entity camera and the paraboloid; correcting the position information of the parabolic track in the coordinate system of the entity camera based on the pose of the entity camera in the preset coordinate system, the pose of the virtual camera in the preset coordinate system and the pose of the paraboloid in the preset coordinate system to obtain the position information of the parabolic track in the coordinate system of the virtual camera; violent sorting behavior is identified based on position information of the parabolic trajectory in a virtual camera coordinate system.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed descriptions of other embodiments, and are not described herein again.
In a specific implementation, each unit or structure may be implemented as an independent entity, or may be combined arbitrarily to be implemented as one or several entities, and the specific implementation of each unit or structure may refer to the foregoing method embodiment, which is not described herein again.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
The violent sorting behavior identification method, apparatus, and computer-readable storage medium provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the embodiments is only intended to help understand the method and core ideas of the present application. Meanwhile, for those skilled in the art, there may be variations in the specific implementations and the application scope according to the ideas of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A violent sorting behavior identification method is characterized by comprising the following steps:
acquiring a parabolic sorting video;
acquiring the pose of the entity camera under a preset coordinate system;
acquiring position information of a parabolic track under an entity camera coordinate system and a pose of a paraboloid under a preset coordinate system based on a plurality of images of the parabolic sorting video, wherein the paraboloid is a plane where the parabolic track is located;
calculating the pose of a virtual camera in a preset coordinate system based on the pose of the paraboloid in the preset coordinate system, and determining that the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the entity camera and the paraboloid;
correcting the position information of the parabolic track in the coordinate system of the physical camera based on the pose of the physical camera in the preset coordinate system, the pose of the virtual camera in the preset coordinate system and the pose of the paraboloid in the preset coordinate system to obtain the position information of the parabolic track in the coordinate system of the virtual camera;
identifying violent sorting behavior based on the position information of the parabolic track in the virtual camera coordinate system.
2. The violent sorting behavior recognition method of claim 1, wherein the acquiring a parabolic sorting video comprises:
acquiring a sorting video;
segmenting the sorting video to obtain a plurality of sorting sub-videos;
performing image detection on images in the sorted sub-videos to extract a parabolic sorted video from the plurality of sorted sub-videos.
3. The violent sorting behavior recognition method according to claim 1, wherein the acquiring position information of a parabolic track in a physical camera coordinate system and a pose of a parabolic surface in a preset coordinate system based on a plurality of images of the parabolic sorting video comprises:
acquiring a throwing area;
intercepting a plurality of images in the parabolic sorting video based on the throwing area to obtain a plurality of intercepted images;
performing image detection on the plurality of intercepted images to acquire position information of the parabolic track under a coordinate system of an entity camera;
and acquiring the pose of the paraboloid under a preset coordinate system.
4. The violent sorting behavior recognition method according to claim 3, wherein the acquiring the pose of the paraboloid under a preset coordinate system comprises:
acquiring a preset image from the plurality of intercepted images;
drawing the parabolic track on the preset image based on the position information of the parabolic track in an entity camera coordinate system to obtain the parabolic image, wherein the parabolic image comprises the parabolic track and a shooting background image;
and performing regression analysis on the parabolic image to obtain the pose of the paraboloid under a preset coordinate system.
5. The violent sorting behavior recognition method of claim 3, wherein the acquiring a throw area comprises:
carrying out image fusion on a plurality of images in the parabolic sorting video to obtain a fused image;
and carrying out image detection on the fused image to acquire the throwing area.
6. The violent sorting behavior recognition method of claim 1, wherein the correcting the position information of the parabolic track in the physical camera coordinate system based on the pose of the physical camera in the preset coordinate system, the pose of the virtual camera in the preset coordinate system and the pose of the parabolic surface in the preset coordinate system to obtain the position information of the parabolic track in the virtual camera coordinate system comprises:
calculating a first pose transformation relation between the coordinate system of the physical camera and the preset coordinate system based on the pose of the physical camera in the preset coordinate system;
calculating a second pose transformation relation between the paraboloid coordinate system of the paraboloid and the preset coordinate system based on the pose of the paraboloid in the preset coordinate system;
calculating a third pose transformation relation between the virtual camera coordinate system and the preset coordinate system based on the pose of the virtual camera in the preset coordinate system and the pose of the physical camera in the preset coordinate system;
determining a fourth pose transformation relation between the physical camera coordinate system and the virtual camera coordinate system based on the first pose transformation relation, the second pose transformation relation and the third pose transformation relation;
and correcting the position information of the parabolic track in the physical camera coordinate system based on the fourth pose transformation relation to obtain the position information of the parabolic track in the virtual camera coordinate system.
7. The violent sorting behavior recognition method of claim 1, wherein the recognizing violent sorting behavior based on the position information of the parabolic trajectory in the virtual camera coordinate system comprises:
acquiring a starting point coordinate and an end point coordinate of the parabolic track in the virtual camera coordinate system based on the position information of the parabolic track in the virtual camera coordinate system;
determining a parabolic distance based on a start point coordinate and an end point coordinate of the parabolic track in the virtual camera coordinate system;
violent sorting behavior is identified based on the parabolic distance.
8. An apparatus for identifying violent sorting activities, comprising:
the first acquisition unit is used for acquiring a parabolic sorting video;
the second acquisition unit is used for acquiring the pose of the entity camera under a preset coordinate system;
a third obtaining unit, configured to obtain, based on a plurality of images of the parabolic sorting video, position information of a parabolic track in an entity camera coordinate system and a pose of a parabolic surface in a preset coordinate system, where the parabolic surface is a plane where the parabolic track is located;
the pose calculation unit is used for calculating the pose of the virtual camera in a preset coordinate system based on the pose of the paraboloid in the preset coordinate system and determining that the included angle between the optical axis of the virtual camera and the paraboloid is larger than the included angle between the optical axis of the entity camera and the paraboloid;
the correction unit is used for correcting the position information of the parabolic track in the coordinate system of the physical camera based on the pose of the physical camera in the preset coordinate system, the pose of the virtual camera in the preset coordinate system and the pose of the paraboloid in the preset coordinate system to obtain the position information of the parabolic track in the coordinate system of the virtual camera;
an identification unit for identifying violent sorting behavior based on the position information of the parabolic track in the virtual camera coordinate system.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a memory; and
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the processor to implement the violent sorting behavior recognition method of any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which is loaded by a processor to carry out the steps in the violent sorting behavior recognition method of any of claims 1 to 7.
CN201911369379.2A 2019-12-26 2019-12-26 Violent sorting behavior identification method and device and computer readable storage medium Active CN113051968B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911369379.2A CN113051968B (en) 2019-12-26 2019-12-26 Violent sorting behavior identification method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113051968A true CN113051968A (en) 2021-06-29
CN113051968B CN113051968B (en) 2024-03-01

Family

ID=76505564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911369379.2A Active CN113051968B (en) 2019-12-26 2019-12-26 Violent sorting behavior identification method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113051968B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106000904A (en) * 2016-05-26 2016-10-12 北京新长征天高智机科技有限公司 Automatic sorting system for household refuse
WO2017027172A1 (en) * 2015-08-13 2017-02-16 Google Inc. Systems and methods to transition between viewpoints in a three-dimensional environment
CN107358194A (en) * 2017-07-10 2017-11-17 南京邮电大学 A kind of violence sorting express delivery determination methods based on computer vision
CN108555908A (en) * 2018-04-12 2018-09-21 同济大学 A kind of identification of stacking workpiece posture and pick-up method based on RGBD cameras
CN109126121A (en) * 2018-06-01 2019-01-04 成都通甲优博科技有限责任公司 AR terminal interconnected method, system, device and computer readable storage medium
CN110332887A (en) * 2019-06-27 2019-10-15 中国地质大学(武汉) A kind of monocular vision pose measurement system and method based on characteristic light punctuate

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Lingyun; LUO Min; WU Yuemin; LI Huiling; MA Bin: "Vision-based detection algorithm for spatial circular arc pose based on a radius constraint", Modular Machine Tool & Automatic Manufacturing Technique, no. 01 *

Also Published As

Publication number Publication date
CN113051968B (en) 2024-03-01

Similar Documents

Publication Publication Date Title
CN108764048B (en) Face key point detection method and device
US10936911B2 (en) Logo detection
CN107909600B (en) Unmanned aerial vehicle real-time moving target classification and detection method based on vision
US20180018503A1 (en) Method, terminal, and storage medium for tracking facial critical area
CN110633629A (en) Power grid inspection method, device, equipment and storage medium based on artificial intelligence
CN109934065B (en) Method and device for gesture recognition
CN107633208B (en) Electronic device, the method for face tracking and storage medium
US20210073973A1 (en) Method and apparatus for component fault detection based on image
CN111667001B (en) Target re-identification method, device, computer equipment and storage medium
US9824267B2 (en) Writing board detection and correction
JP2020149111A (en) Object tracking device and object tracking method
US10600202B2 (en) Information processing device and method, and program
CN108648141B (en) Image splicing method and device
CN112634366B (en) Method for generating position information, related device and computer program product
CN113326836A (en) License plate recognition method and device, server and storage medium
CN115471439A (en) Method and device for identifying defects of display panel, electronic equipment and storage medium
WO2021138893A1 (en) Vehicle license plate recognition method and apparatus, electronic device, and storage medium
CN113051968B (en) Violent sorting behavior identification method and device and computer readable storage medium
CN114372993B (en) Layered detection method and system for oblique-shooting shelf based on image correction
CN112802112B (en) Visual positioning method, device, server and storage medium
CN114170576A (en) Method and device for detecting repeated images
CN115619698A (en) Method and device for detecting defects of circuit board and model training method
CN113538449A (en) Image correction method, device, server and storage medium
CN113516013A (en) Target detection method and device, electronic equipment, road side equipment and cloud control platform
CN112991446A (en) Image stabilization method and device, road side equipment and cloud control platform

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant