WO2019184275A1 - Image processing method, device and system - Google Patents
- Publication number
- WO2019184275A1 (PCT/CN2018/106752)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- video frame
- label
- collection device
- tag
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Definitions
- the present application relates to the field of video surveillance technologies, and in particular, to an image processing method, device, and system.
- image acquisition devices are provided in many scenes, and related personnel can monitor the scene through video frame images collected by the device.
- the display content only includes the image itself and the acquisition time of the image.
- for the user who views the video frame image, he can understand the specific content contained in the image only if he is already familiar with the real environment corresponding to the video frame image. It can be seen that this image display method is not intuitive, and the display effect is poor.
- An object of the embodiments of the present application is to provide an image processing method, device, and system, which improve the display effect of a video frame image.
- an image processing method including:
- the video frame image after the tag is added is displayed according to a preset display rule.
- the video frame image is a panoramic image
- the first collecting device is configured to correspond to at least one second collecting device
- the second collecting device performs image capturing on the sub-scene corresponding to the panoramic image
- the method further includes:
- the step of determining at least one target location in the video frame image includes:
- the first collection device is an augmented reality (AR) panoramic camera.
- the step of generating a label according to the image of the sub-scene includes:
- the step of adding the target information in the sub-scene image to the content of the label includes:
- identifying the sub-scene image; determining target information in the sub-scene image according to the recognition result; and adding the target information to the content of the label;
- the step of displaying the tagged video frame image according to the preset display rule includes:
- the content of the added tag is displayed.
- the step of displaying the tagged video frame image according to the preset display rule includes:
- the video frame image after the tag is added, and the content of the added tag, are displayed.
- display the contents of the added tags including:
- the method further includes:
- the clicked label is determined as the target label
- the content of the target tag is displayed in the video frame image.
- before the step of determining the at least one target location in the video frame image, the method further includes:
- the step of determining at least one target location in the video frame image includes:
- a target location of the added tag is determined according to the tag addition instruction.
- the step of displaying the tagged video frame image according to the preset display rule includes:
- Determining a layer display strategy and determining, according to the layer display strategy, a current display layer and a display manner of the current display layer;
- the label corresponding to the current display layer is displayed.
- the step of acquiring the sub-scene image collected by the second collection device includes:
- the step of generating a label includes:
- the step of detecting whether an abnormal event occurs in the panoramic image comprises:
- the step of determining the target second collection device corresponding to the abnormal event includes:
- the method further includes:
- the step of displaying the tagged video frame image according to the preset display rule includes:
- the label is displayed in the video frame image in a preset alarm mode.
- an embodiment of the present application further discloses an image processing apparatus, including: a processor and a memory;
- a memory for storing a computer program
- the processor, when executing the program stored in the memory, implements the following steps:
- the video frame image after the tag is added is displayed according to a preset display rule.
- the video frame image is a panoramic image
- the first collecting device is configured to correspond to at least one second collecting device
- the second collecting device performs image capturing on the sub-scene corresponding to the panoramic image
- the processor is further configured to implement the following steps:
- the processor is further configured to implement the following steps:
- the processor is further configured to implement the following steps:
- identifying the sub-scene image; determining target information in the sub-scene image according to the recognition result; and adding the target information to the content of the label;
- the processor is further configured to implement the following steps:
- the content of the added tag is displayed.
- the processor is further configured to implement the following steps:
- the video frame image after the tag is added, and the content of the added tag, are displayed.
- the processor is further configured to implement the following steps:
- the processor is further configured to implement the following steps:
- the clicked label is determined as the target label
- the content of the target tag is displayed in the video frame image.
- the processor is further configured to implement the following steps:
- a target location of the added tag is determined according to the tag addition instruction.
- the processor is further configured to implement the following steps:
- Determining a layer display strategy and determining, according to the layer display strategy, a current display layer and a display manner of the current display layer;
- the label corresponding to the current display layer is displayed.
- the processor is further configured to implement the following steps:
- the processor is further configured to implement the following steps:
- the processor is further configured to implement the following steps:
- the processor is further configured to implement the following steps:
- the label is displayed in the video frame image in a preset alarm mode.
- an embodiment of the present application further discloses an image processing system, including: a first collection device and an image processing device, where
- the first collecting device is configured to collect a video frame image, and send the collected video frame image to the image processing device;
- the image processing device is configured to determine, according to a video frame image acquired by the first collection device, at least one target location in the video frame image; add a label at each determined target location, where the label is generated based on content input by the user or an image acquired by the second collection device; and display, according to the preset display rule, the video frame image after the label is added.
- the system further includes: at least one second collection device,
- the second collection device is configured to perform image collection on a sub-scene corresponding to the panoramic image, where the panoramic image is a video frame image collected by the first collection device;
- the image processing device is further configured to acquire a sub-scene image collected by the second collection device; generate a label according to the sub-scene image; and determine, according to calibration information of the first collection device and the second collection device acquired in advance, a target position in the panoramic image for the label corresponding to the second collection device.
- the first collection device is an augmented reality (AR) panoramic camera.
- an embodiment of the present application further discloses a computer readable storage medium, where the computer readable storage medium stores a computer program, and the computer program, when executed by a processor, implements any one of the foregoing image processing methods.
- an embodiment of the present application further discloses executable program code which, when executed, performs any one of the image processing methods described above.
- the label can help the user understand the specific content included in the video frame image; therefore, the video frame image with the tag added can display the image content more intuitively, and the display effect is better.
- FIG. 1 is a schematic diagram of a first process of an image processing method according to an embodiment of the present disclosure
- FIG. 1a is a schematic diagram of a display interface according to an embodiment of the present application.
- FIG. 1b is a schematic diagram of another display interface provided by an embodiment of the present application.
- FIG. 2 is a second schematic flowchart of an image processing method according to an embodiment of the present application.
- FIG. 2a is a schematic diagram of an application scenario provided by an embodiment of the present application.
- FIG. 3 is a schematic diagram of a third process of an image processing method according to an embodiment of the present disclosure.
- FIG. 4a is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
- FIG. 4b is a schematic structural diagram of another image processing device according to an embodiment of the present disclosure.
- FIG. 5 is a schematic structural diagram of an image processing system according to an embodiment of the present application.
- an embodiment of the present application provides an image processing method, device, and system.
- the method can be applied to various image processing devices, and is not specifically limited.
- FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure, including:
- S101: Determine, for the video frame image acquired by the first collection device, at least one target location in the video frame image.
- the processing object of the embodiment of the present application is a video frame image; each frame image of a video may be processed by the method provided in the embodiment of the present application.
- there are several ways to determine the target location. For example, one or more fixed positions may be set in advance in the video frame image as the target position; for instance, the middle position of the video frame image may be set as the target position.
- alternatively, the user-specified location may be determined as the target location according to a user instruction. It should be noted that, for the same video, or for multiple videos of the same scene, the user may send an instruction only once; according to that instruction, the target position may be determined in each of the video frame images of the video or videos.
- the installation location of the first collection device is generally fixed, and the scene corresponding to the captured video frame images is also substantially unchanged. Therefore, across video frame images, the picture content corresponding to a preset position, or to a position specified in the above user instruction, usually differs little; thus, the target position can be determined in multiple video frame images according to a single instruction sent by the user.
- the target location may also be determined in other ways, which are not limited.
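The position-determination logic above can be sketched as follows; this is an illustrative Python sketch, not the patent's implementation, and all function and parameter names are assumptions:

```python
# Illustrative sketch of determining target positions in a video frame image.
# A single user instruction (clicks) may be reused for every later frame,
# since the first collection device is stationary; otherwise a preset
# position such as the frame center is used.

def determine_target_positions(frame_size, user_clicks=None):
    """Return a list of (x, y) target positions for one video frame."""
    width, height = frame_size
    if user_clicks:
        # Reuse the positions from the user's one-time instruction.
        return list(user_clicks)
    # Preset fallback: the middle position of the video frame image.
    return [(width // 2, height // 2)]
```

Called once per frame, this yields the same target positions across the whole video, matching the observation that the picture content at a fixed position changes little between frames.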
- S102: Add a label at each determined target position, where the label is generated according to an input instruction or an image acquired by the second collection device.
- the input command can be a tagged instruction entered by the user.
- the second collection device may be an acquisition device that is configured in the same scenario as the first collection device. For example, the first collection device performs image collection for the scene A, and the second collection device performs image collection for the sub-scenario A1 in the scene A.
- the label may include a "tag symbol” and a "tag content”.
- the "tag symbol” may be an arrow, a triangle, etc.
- the "tag symbol” is for marking a position in the video frame image.
- the specific format of the label is not limited; the content of the label may be an image collected by other collection devices, or may be some image analysis data, or may be associated data of the scene at the label, and the like, and is not limited.
- the image analysis data may be a face recognition result, a vehicle recognition result, or the like
- the associated data of the scene may be an introduction content of the scene, or if the scene is a traffic bayonet, the associated data may be traffic flow data or the like.
- the tag may also include a "tag name", for example, may be some simple text information, such as "some building", “some park” and the like.
- the input instruction is the text information “some building” input by the user and the specific introduction of the building
- a label may be generated, and the label symbol may be an arrow, and the label name may be the text information “some building”, the label
- the content can be a specific introduction to the building.
- the target location is a traffic bayonet
- the label content added at the traffic bayonet may be video data collected at the bayonet, a captured image at the bayonet, traffic flow data at the bayonet, and the like.
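The label structure described above ("tag symbol", "tag name", "tag content") could be modeled as a simple record; the field names below are illustrative assumptions, not the patent's data format:

```python
# Hypothetical label record: a marker symbol at a target position, an
# optional short name, and content (text, an image, or analysis data).

def make_label(position, symbol="arrow", name=None, content=None):
    return {
        "position": position,  # (x, y) target position in the frame
        "symbol": symbol,      # e.g. an arrow or a triangle
        "name": name,          # e.g. "some building"
        "content": content,    # e.g. an introduction, video data, analysis data
    }

label = make_label((320, 240), name="some building",
                   content="a specific introduction to the building")
```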
- the user can design a label according to his own needs. Specifically, the user may click a certain position in the video frame image and input some text or image content; the device performing the scheme may generate a corresponding label according to the content input by the user, and, in this video frame image and subsequent video frame images, determine the location clicked by the user as the target location and add the generated label at that target location.
- the label may be generated according to the image collected by the other collection device.
- the first collection device performs image collection for the scene A
- the second collection device performs image collection for the sub-scenario A1 in the scene A.
- the tag may be generated according to the image collected by the second collecting device, and the position corresponding to the sub-scene A1 is determined as the target position in S101, and the tag corresponding to the sub-scene A1 is added at the target position.
- the label added in the video frame image includes both the label generated according to the user's needs and the label generated according to the image collected by other collection devices, so that the label type is more abundant.
- the tag may include a "tag symbol” and a "tag content”.
- the "tag symbol” and the “tag content” may be separately displayed.
- the "tag symbol” may be added to the video frame.
- the "tag content” is displayed in an area other than the video frame image, so that the content of the tag does not cover the video frame image, and the display effect is better.
- the tag further includes a "tag name”
- the "tag name” may be displayed in the video frame image, or may be displayed in an area outside the video frame image, which is not limited.
- in the first area, the video frame image after the tag is added may be displayed, and in the second area, the content of the added tag may be displayed.
- the first area and the second area may be different areas of the same display device, or may be a display area in an adjacent display device, which is not limited.
- the video frame image after adding the label and the content of the added label are displayed.
- the video frame image after the tag is added may be displayed in the main screen area, and the content of the added tag is displayed in the small screen area.
- the small screen area may be located at any position on the right side, the left side, the upper side, and the lower side of the main screen area, and is not limited.
- the "content of the tag” can be of various types, such as video data, captured images, image analysis data, and the like, and different types of data can be displayed in different areas.
- the video data and the captured image may be displayed in the small screen area or the second area in the above picture
- the image analysis data may be displayed in the video frame image, etc., and the specific display manner is not limited.
- the specific shape, color, transparency, and specific type of "tag content” of the "tag symbol” may be set in advance or may be changed according to user selection.
- the current display label may be determined; and the content of the current display label is displayed.
- the display order can be set, and the current display label is determined according to the order.
- the display order can be determined randomly, or can be set according to the importance degree of each label.
- alternatively, the label corresponding to a received display instruction may be determined as the current display label; the manner of determining the current display label is not limited.
- the clicked tag may be determined as the target tag; the content of the target tag is displayed in the video frame image.
- the content of the label can be directly displayed in the video frame image.
- a layer classification policy may be preset, and according to the policy, the layer category corresponding to each label is determined; in other words, each label is assigned to a layer category. For example, labels may be divided into an intersection label layer, a bayonet label layer, an area label layer, a building label layer, and so on.
- the layer display strategy can be determined based on user instructions.
- the layer display strategy can include the current display layer and how the current display layer is displayed.
- in the first case, the user instruction includes only the current display layer information, and the device determines the current display layer according to the user instruction.
- the device stores the display manner corresponding to each layer, so that the device can further determine the display manner of the current display layer.
- in the second case, the user instruction includes both the current display layer information and the display manner information, and the device can determine the current display layer and the display manner of the current display layer according to the user instruction. Both cases are reasonable.
- the display mode may include: flashing display, jitter display, static display, etc., and is not limited.
- the label is displayed separately from the content of the label, and the display manner may include the manner in which the label is displayed, or the manner in which the label content is displayed. For example, the display manner corresponding to the building label layer may be: the label is displayed in the video frame image, and the corresponding label content is flashed in another area (the second area or the picture-in-picture area).
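The layer classification and per-layer display described above can be sketched as a filter over labels; the "layer" field and category names are illustrative assumptions:

```python
# Hypothetical sketch: each label carries a layer category, and only the
# labels of the current display layer (chosen by the layer display
# strategy) are shown.

def labels_for_layer(labels, current_layer):
    """Return the labels belonging to the current display layer."""
    return [lb for lb in labels if lb["layer"] == current_layer]

labels = [
    {"name": "north gate", "layer": "bayonet"},
    {"name": "office tower", "layer": "building"},
    {"name": "east crossing", "layer": "intersection"},
]
```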
- the detail image corresponding to the video frame image collected by the first collection device may be acquired. After S101, according to a pixel point correspondence between the detail image and the video frame image acquired in advance, the location in the detail image corresponding to the target location is determined as the to-be-processed location, and the label added at the target location is also added at the to-be-processed location corresponding to that target location. In this embodiment, S103 may include: displaying, according to the preset display rule, the video frame image after the tag is added and the detail image after the tag is added.
- that is, the video frame image acquired in S101 may be a panoramic image; in addition, a detail image corresponding to the panoramic image may be acquired, and, according to the pixel point correspondence between the panoramic image and the detail image, each label added in the panoramic image is mapped to the detail image, and the label is also added in the detail image.
- the third collection device may be disposed outside the first collection device, where the first collection device and the third collection device perform image collection for the same scene, the first collection device collects the panoramic image, and the third collection device collects the detailed image.
- the third collection device can be a dome camera; the dome camera can rotate and collect detail images of different viewing angles.
- the pixel point correspondence between the panoramic image and the detail image may be obtained according to calibration information between the first collection device and the third collection device.
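The pixel point correspondence between the panoramic image and the detail image is only said to come from calibration information; as an illustrative assumption, it could be represented by a 3x3 homography applied to each target position:

```python
# Sketch: map a target position in the panoramic image to the
# to-be-processed position in the detail image via an assumed 3x3
# homography H obtained from calibration.

def map_point(H, point):
    """Apply homography H (a 3x3 nested list) to an (x, y) point."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    px = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    py = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return (px, py)

# With an identity calibration, a position maps to itself.
IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```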
- the dome camera can collect detail images corresponding to the four regions, namely detail images B1, B2, B3, and B4.
- the four detail images can be displayed in turn in a preset order.
- the currently displayed detail image is B1
- 10 target positions are determined in area 1
- labels are added at the 10 target positions; correspondingly, there are also 10 to-be-processed positions in the detail image B1, and the same 10 labels are added at those 10 to-be-processed positions.
- since the number of labels is large, only some of the labels may be displayed in area 1 of the panoramic image A, while all 10 labels are displayed in the detail image B1.
- the video frame image after the label is added may be displayed in the first area, and the detail image after the label is added may be displayed in the third area; alternatively, the video frame image and the detail image after the labels are added may be displayed in the form of picture-in-picture.
- displaying the label, as described here, means displaying only the "tag symbol", while the "tag content" is displayed in another area.
- in the first area, the video frame image after the label is added may be displayed; in the second area, the content of the added label is displayed; and in the third area, the detail image after the label is added is displayed.
- the first area, the second area, and the third area mentioned herein may be different areas of the same display device, or may be display areas in different display devices.
- the video frame image after the tag is added, the detailed image after the tag is added, and the content of the added tag can be displayed in the form of picture-in-picture.
- the video frame image after adding the label is displayed in the main screen area
- the detailed image after adding the label is displayed in the small screen area in the lower left corner
- the content of the added label is displayed in the small screen area on the right side.
- the "tag name” may be displayed in the video frame image, or may be displayed in an area outside the video frame image, which is not limited.
- FIG. 2 is a second schematic flowchart of an image processing method according to an embodiment of the present disclosure. The embodiment shown in FIG. 2 is based on the embodiment shown in FIG. 1.
- S201: Acquire a sub-scene image collected by the second collection device.
- the video frame image collected by the first collection device is a panoramic image
- the first collection device corresponds to at least one second collection device
- the second collection device performs image collection on the sub-scene corresponding to the panoramic image.
- the image collected by the second collection device is a sub-scene image.
- the first collection device may be an augmented reality (AR) panoramic camera, so that the collected panoramic image is of better quality.
- the first collection device may also be a plurality of bullet cameras, and the images collected by the plurality of bullet cameras are stitched to obtain a panoramic image.
- the second collection device can be an ordinary camera, such as a dome camera, a capture camera, and the like. If the second collection device is a dome camera, the sub-scene image may be a surveillance video image; if the second collection device is a capture camera, the sub-scene image may be a snapshot image, and so on, which is not limited.
- a large scene A includes four sub-scenes: A1, A2, A3, and A4.
- the first collection device performs image collection on scene A
- the second collection device 1 performs image acquisition on A1, the second collection device 2 performs image acquisition on A2, the second collection device 3 performs image acquisition on A3, and the second collection device 4 performs image acquisition on A4.
- the first collection device and the second collection device may be the same device, such as an AR Hawkeye device; the AR Hawkeye device has an augmented reality function.
- the AR Hawkeye device may integrate a plurality of camera lenses and one dome camera lens; the image obtained by stitching the images from the plurality of camera lenses can be used as the panoramic image, and the image captured by the dome camera lens can be used as a sub-scene image.
- the AR Hawkeye device may also be provided with a platform for scheduling and managing the plurality of camera lenses and the dome camera lens.
- the second collection device sends the collected sub-scene image to the device that executes the solution in real time.
- the device that executes the solution acquires the sub-scene image from the second collection device after receiving the user instruction.
- the device that executes the solution acquires the sub-scene image from the second collection device corresponding to the abnormal event after detecting an abnormal event in the video frame image (panoramic image) of the S101.
- the abnormal event may be a traffic accident, a robbery event, etc., and is not limited.
- the embodiment of the present application does not limit the timing of acquiring a sub-scene image.
- the label may include a "tag symbol” and a "tag content”.
- the "tag symbol” may be an arrow, a triangle, etc., and the "tag symbol” is for marking a position in the video frame image.
- the specific form of the label is not limited; the "tag content" may include the sub-scene image.
- the tag may also include a "tag name", for example, may be some simple text information, such as "some building", “some park” and the like.
- the sub-scene image and/or the target information in the sub-scene image may be added to the content of the tag.
- the tag contains the target information in the sub-scene image.
- the target information may include vehicle information in the image, such as a license plate number or vehicle body color, and may also include road information, such as traffic flow on the road; or the target information may be abnormal event information, such as a traffic accident.
- the target information may also be person information in the image, such as height, gender, etc.; or abnormal event information, such as a robbery, a fire, etc.
- the target information may be obtained in different ways. In one approach, the device that executes the solution may recognize the sub-scene image acquired in S201 and determine the target information in the sub-scene image according to the recognition result.
- alternatively, the second collection device may have an image recognition function, and the second collection device sends the identified target information to the device.
- alternatively, a server connected to the second collection device identifies the sub-scene image and sends the identified target information to the device. All of these methods are reasonable.
- the tag contains both the sub-scene image and the target information in the sub-scene image.
- the target information can be understood as an introduction or description of the sub-scene image, and the target information can be set around the sub-scene image so that the user can better understand what is happening in the sub-scene image.
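Adding the sub-scene image and/or its target information to the tag content can be sketched as below; the dictionary layout is an assumption for illustration only:

```python
# Sketch: build the tag content from the sub-scene image, optionally
# attaching the target information as a description shown alongside it.

def build_label_content(sub_scene_image, target_info=None):
    content = {"image": sub_scene_image}
    if target_info:
        # e.g. a license plate number, traffic flow, or abnormal event info
        content["description"] = target_info
    return content
```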
- S101 may be S101A: determining, according to the calibration information of the first collection device and the second collection device that are acquired in advance, a target position of the label corresponding to the second collection device in the panoramic image.
- the calibration relationship can be understood as a conversion relationship between the panoramic image coordinate system and the sub-scene image coordinate system. For example, for a position X in the sub-scene A1, the pixel coordinates of position X in the panoramic image are (x1, y1), and its pixel coordinates in the sub-scene image acquired by the second collection device 1 are (x2, y2); the calibration relationship is the conversion relationship between (x1, y1) and (x2, y2).
- related information (calibration information) of the calibration relationship may be acquired in advance, and the calibration information may be used to determine a position of the label of the second collection device in the panoramic image.
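One way to picture the calibration information is as a per-device pixel region in the panoramic image, with the label's target position at the region's center; this data layout is purely illustrative:

```python
# Assumed calibration layout: for each second collection device, the
# (x, y, width, height) region its sub-scene occupies in the panorama.
CALIBRATION = {
    "device1": (0, 0, 960, 540),
    "device2": (960, 0, 960, 540),
}

def label_target_position(device_id, calibration=CALIBRATION):
    """Target position of the device's label: the center of its region."""
    x, y, w, h = calibration[device_id]
    return (x + w // 2, y + h // 2)
```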
- a third collection device is further disposed in addition to the first collection device and the second collection device.
- the first collection device is a plurality of bullet cameras, which acquire the panoramic image; the second collection device is a capture camera, whose captured image serves as the sub-scene image; and the third collection device acquires the detail image.
- at least one target position is determined in the panoramic image, and, according to the calibration information between the first collection device and the third collection device, the position in the detail image corresponding to the target position is determined as the to-be-processed position;
- the panoramic image after the label is added, the detailed image after the label is added, and the content of the added label are displayed.
- images collected by different devices can otherwise only be displayed separately (there is no relationship between the images); if the user needs to pay attention to images collected by multiple devices, the user must switch back and forth between them, and the operation is complex.
- in this solution, the first collection device collects the panoramic image, and the second collection device collects an image of a sub-scene in the panoramic image to generate a sub-scene image; a label is generated according to the sub-scene image and added to the panoramic image, and the panoramic image after the label is added is displayed. Thus, the solution simultaneously displays the image collected by the first collection device (the panoramic image) and the image collected by the second collection device (as the label), so the user can pay attention to images collected by multiple devices without switching, and the operation is simple.
- an abnormal event may be detected in the panoramic image collected by the first collection device; if yes, the target second collection device corresponding to the abnormal event is determined; and the sub-scene image collected by the target second collection device is acquired.
- the abnormality model may be preset: according to the above description, the abnormal events may include traffic accidents, robberies, fires, etc., and these abnormal events may be simulated in advance to generate corresponding abnormal models.
- the panoramic image is then matched against the preset anomaly models; if the matching is successful, it indicates that an abnormal event has occurred in the panoramic image.
- the position where the match is successful is the position of the abnormal event in the panoramic image.
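The matching step above can be illustrated with a minimal Python sketch. The patent does not prescribe a concrete algorithm; the feature-vector representation, the cosine-similarity measure, and the 0.8 threshold below are all assumptions made for this example.

```python
# Hypothetical sketch: detect an abnormal event by matching regions of the
# panoramic image against preset anomaly models simulated in advance.
# A "model" here is just a feature vector; real systems would use a
# trained detector, which the patent leaves unspecified.

def match_anomaly(panorama_blocks, anomaly_models, threshold=0.8):
    """panorama_blocks: {position: feature_vector} for image regions.
    anomaly_models: {event_name: feature_vector} prepared in advance.
    Returns (event_name, position) of the first successful match, else None."""
    def similarity(a, b):
        # Cosine similarity between two equal-length feature vectors.
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    for position, block in panorama_blocks.items():
        for event, model in anomaly_models.items():
            if similarity(block, model) >= threshold:
                return event, position  # match succeeded: abnormal event found
    return None  # no abnormal event detected in the panoramic image
```

The control flow mirrors the description: each region is compared against each preset model, and a successful match yields both the event type and its position in the panorama.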
- alternatively, abnormal event alarm information for the panoramic image may be received from another device or from the user; receipt of the alarm information indicates that an abnormal event has occurred in the panoramic image.
- the device that implements the solution can communicate with other devices, and other devices can send abnormal event alarm information to the device after determining that an abnormal event occurs in the panoramic image.
- the user can also send an abnormal event alarm message to the device, which is also reasonable.
- the abnormal event alarm information may carry the position of the abnormal event in the panoramic image.
- there is a calibration relationship between the first collection device and the four second collection devices.
- this related information is recorded as calibration information.
- based on the calibration information, the target second collection device corresponding to the above "position of the abnormal event in the panoramic image" can be determined, that is, the second collection device that performs image acquisition for the sub-scene where the abnormal event is located.
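This lookup from an event position to the target second collection device can be sketched as a simple region query, assuming (hypothetically) that the calibration information records the panoramic sub-region covered by each second collection device; the patent does not fix a representation.

```python
# Hypothetical sketch of using calibration information to find the target
# second collection device: each device is calibrated to a rectangular
# sub-region of the panorama (an assumed representation).

def target_device(event_pos, calibration):
    """calibration: {device_id: (x0, y0, x1, y1)} region in panorama pixels.
    Returns the id of the device whose region covers event_pos, or None."""
    x, y = event_pos
    for device_id, (x0, y0, x1, y1) in calibration.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return device_id  # this device images the affected sub-scene
    return None  # event position not covered by any second collection device
```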
- S202 is: generating a label corresponding to the abnormal event according to the sub-scene image.
- the focus area may be divided in the panoramic image in advance; when an abnormal event is detected in the panoramic image, it may be determined whether the position of the abnormal event in the panoramic image lies within a preset focus area; if so, the label is displayed in the video frame image in a preset alarm mode.
- for example, suppose intersection A in the panoramic image is an area that needs to be focused on;
- intersection A is then set as the focus area in the panoramic image in advance. If an abnormal event occurs in the panoramic image and it occurs at intersection A, the label is displayed in the video frame image in a preset alarm mode.
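A minimal sketch of the focus-area check, assuming focus areas are stored as rectangles in panoramic coordinates (an assumption; the patent does not fix a representation):

```python
# Hypothetical sketch: choose the display mode for a label depending on
# whether the abnormal event falls inside a preset focus area.

def display_mode(event_pos, focus_areas):
    """focus_areas: list of (x0, y0, x1, y1) preset focus rectangles.
    Returns 'alarm' if the abnormal event lies in a focus area, else 'normal'."""
    x, y = event_pos
    for x0, y0, x1, y1 in focus_areas:
        if x0 <= x < x1 and y0 <= y < y1:
            return "alarm"   # e.g. pop-up color change or pop-up shake
    return "normal"
```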
- when the label and the content of the label are displayed separately, the content may be displayed in the second area or in the picture-in-picture area using an alarm method, for example, a color change of the pop-up window, a shake of the pop-up window, etc.; the specific alarm method is not limited.
- FIG. 3 is a third schematic flowchart of an image processing method according to an embodiment of the present disclosure. The embodiment shown in FIG. 3 is based on the embodiment shown in FIG.
- S301 Receive a label adding instruction sent by a user.
- the user can click on a target such as a building or an intersection in a video frame image, and then input content related to the target (target content); the target content may include text information (such as a building name or other relevant instructions), or may also contain images.
- the tag addition instruction can carry the target location (the location clicked by the user) and the target content (the content, text or image input by the user).
- the user may also obtain the sub-scene image collected by the second collection device and use the acquired sub-scene image as the target content, or the user may select the sub-scene image together with the target information in the sub-scene image as the target content (the target information has the same meaning as in the embodiment shown in FIG. 2 and is not described again).
- S302 Generate a label according to the label adding instruction.
- the label may include a "tag symbol” and a "tag content”.
- the "tag symbol” may be an arrow, a triangle, etc.
- the "tag symbol” is for marking a position in the video frame image.
- the specific form of the label is not limited; in this embodiment, the target content input by the user may be used as the content of the label.
- the tag may also include a "tag name", for example, may be some simple text information, such as “some building”, “some park” and the like. It is also possible to use part of the content input by the above user as the name of the tag.
- in this case, S101 is S101B: determining the target position of the added tag according to the tag adding instruction.
- the target location is the location that the above user clicks.
- the location and content of the label are determined by the user, that is, the user can design his own label according to his own needs, and the user experience is better.
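A possible shape for such a user-defined label, sketched in Python. The dict fields and the default "arrow" symbol are illustrative assumptions; the patent only requires a tag symbol, tag content, and optionally a tag name.

```python
# Hypothetical sketch: build a label from a tag-adding instruction that
# carries the clicked position and the user-entered target content.

def build_label(instruction):
    """instruction: dict with the clicked 'position', the user 'content'
    (text or image), and optionally a short 'name' for the tag."""
    label = {
        "symbol": "arrow",                    # marks the position (assumed default)
        "position": instruction["position"],  # where the user clicked
        "content": instruction["content"],    # user-entered text or image
        "name": instruction.get("name", ""),  # optional short tag name
    }
    return label
```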
- the embodiment of the present application further provides an image processing device.
- the embodiment of the present application further provides an image processing device, as shown in FIG. 4a, comprising: a processor 401 and a memory 402;
- the processor 401 is configured to implement any of the above image processing methods when executing a program stored on the memory 402.
- FIG. 4b is a schematic structural diagram of another image processing apparatus according to an embodiment of the present disclosure, including: a housing 501, a processor 502, a memory 503, a circuit board 504, and a power supply circuit 505, wherein the circuit board 504 is disposed in the housing 501.
- the processor 502 and the memory 503 are disposed on the circuit board 504;
- the power supply circuit 505 is configured to supply power to the respective circuits or devices of the image processing apparatus;
- the memory 503 is configured to store executable program code;
- the processor 502 reads the executable program code stored in the memory 503 and runs a program corresponding to the executable program code, for performing the following steps:
- the video frame image after the tag is displayed according to the preset display rule.
- the video frame image is a panoramic image
- the first collection device corresponds to at least one second collection device
- the second collection device performs image collection on the sub-scene corresponding to the panoramic image
- the processor is further configured to implement the following steps:
- identifying the sub-scene image; determining target information in the sub-scene image according to the recognition result; and adding the target information to the content of the label;
- the processor is further configured to implement the following steps:
- the content of the added tag is displayed.
- the processor is further configured to implement the following steps:
- displaying, in the form of a picture-in-picture, the video frame image after the tag is added and the content of the added tag.
- the processor is further configured to implement the following steps:
- the clicked label is determined as the target label
- the content of the target tag is displayed in the video frame image.
- the processor is further configured to implement the following steps:
- a target location of the added tag is determined according to the tag addition instruction.
- the processor is further configured to implement the following steps:
- Determining a layer display strategy and determining, according to the layer display strategy, a current display layer and a display manner of the current display layer;
- the label corresponding to the current display layer is displayed.
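The layer-based display steps above can be sketched as follows; the label "kind" key, the layer names, and the strategy fields are assumptions made for illustration.

```python
# Hypothetical sketch: classify labels into layers according to a preset
# classification, then pick the labels of the currently displayed layer
# together with its display manner, as a layer display strategy dictates.

def labels_to_display(labels, layer_of, strategy):
    """labels: list of label dicts with a 'kind' key.
    layer_of: maps a label kind to its layer name (preset classification).
    strategy: {'current_layer': ..., 'mode': ...} display strategy.
    Returns (labels on the current layer, display manner)."""
    current = strategy["current_layer"]
    shown = [lb for lb in labels if layer_of[lb["kind"]] == current]
    return shown, strategy["mode"]
```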
- the processor is further configured to implement the following steps:
- the label is displayed in the video frame image in a preset alarm mode.
- the processor is further configured to implement the following steps:
- displaying, according to the preset display rule, the video frame image after the tag is added and the detail image after the tag is added.
- the processor is further configured to implement the following steps:
- in the first area, displaying the video frame image after the label is added, and in the third area, displaying the detail image after the label is added;
- or, in the form of a picture-in-picture, displaying the video frame image after the label is added and the detail image after the label is added.
- the label can help the user understand the specific content included in the video frame image; therefore, after the label is added,
- the video frame image can display the image content more intuitively, and the display effect is better.
- the embodiment of the present application further provides an image processing system, where the system may include: a first collection device and an image processing device, where
- the first collecting device is configured to collect a video frame image, and send the collected video frame image to the image processing device;
- the image processing device is configured to: for the video frame image collected by the first collection device, determine at least one target location in the video frame image; add a label at each determined target location, the label being generated according to an input instruction or an image collected by the second collection device; and display, according to the preset display rule, the video frame image after the label is added.
- the system further includes at least one second collection device (second collection device 1, second collection device 2, second collection device 3, and second collection device 4),
- the second collection device is configured to perform image collection on a sub-scene corresponding to the panoramic image, where the panoramic image is a video frame image collected by the first collection device;
- the image processing device is further configured to acquire a sub-scene image collected by the second collection device; generate a label according to the sub-scene image; and determine, according to the calibration information of the first collection device and the second collection device acquired in advance, the target position in the panoramic image of the label corresponding to the second collection device.
- the image processing device in this embodiment may be a platform device, which may acquire resources from multiple collection devices, display images, and interact with users.
- the first collection device is an augmented reality AR panoramic camera.
- the image processing device can also be used to:
- identifying the sub-scene image; determining target information in the sub-scene image according to the recognition result; and adding the target information to the content of the label;
- the image processing device can also be used to:
- the content of the added tag is displayed.
- the image processing device can also be used to:
- displaying, in the form of a picture-in-picture, the video frame image after the tag is added and the content of the added tag.
- the image processing device can also be used to:
- the clicked label is determined as the target label
- the content of the target tag is displayed in the video frame image.
- the image processing device can also be used to:
- a target location of the added tag is determined according to the tag addition instruction.
- the image processing device can also be used to:
- Determining a layer display strategy and determining, according to the layer display strategy, a current display layer and a display manner of the current display layer;
- the label corresponding to the current display layer is displayed.
- the image processing device can also be used to:
- the label is displayed in the video frame image in a preset alarm mode.
- the system may further include: a third collection device;
- the third collection device is configured to collect a detailed image corresponding to the panoramic image, where the panoramic image is a video frame image collected by the first collection device;
- the image processing device is further configured to: acquire the detail image collected by the third collection device; determine, according to a pixel point correspondence between the detail image and the video frame image, the position in the detail image corresponding to the target location, as a to-be-processed location; add the tag added at the target location to the to-be-processed location corresponding to the target location; and display, according to a preset display rule, the tagged video frame image and the tagged detail image.
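The position mapping between the panoramic video frame and the detail image can be illustrated with a deliberately simple pixel correspondence. A scale-and-offset mapping is assumed here purely for the sketch; a real deployment would calibrate the actual correspondence (e.g. a homography) in advance.

```python
# Hypothetical sketch: map a target position in the panoramic video frame
# to the corresponding to-be-processed position in the detail image, using
# an assumed pre-acquired scale-and-offset pixel correspondence.

def to_detail_position(target_pos, mapping):
    """mapping: (scale_x, scale_y, offset_x, offset_y) acquired in advance.
    Returns the to-be-processed position in detail-image coordinates."""
    sx, sy, ox, oy = mapping
    x, y = target_pos
    return (x * sx + ox, y * sy + oy)  # where the label is re-added
```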
- the image processing device acquires a video frame image collected by the first collection device, adds a label at a target position in the video frame image, and then displays the labeled video frame image. The label helps users understand the specific content contained in the video frame image; therefore, the labeled video frame image displays the image content more intuitively, and the display effect is better.
- the embodiment of the present application further provides a computer readable storage medium, where the computer readable storage medium stores a computer program, and when the computer program is executed by the processor, implements any of the above image processing methods.
- the embodiment of the present application also provides executable program code, which is configured to be run to execute any of the image processing methods described above.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
Description
Claims (24)
- An image processing method, comprising: for a video frame image collected by a first collection device, determining at least one target position in the video frame image; adding a label at each determined target position, the label being generated according to an input instruction or an image collected by a second collection device; and displaying, according to a preset display rule, the video frame image after the label is added.
- The method according to claim 1, wherein the video frame image is a panoramic image, the first collection device corresponds to at least one second collection device, and the second collection device performs image collection on a sub-scene corresponding to the panoramic image; before determining the at least one target position in the video frame image, the method further comprises: acquiring a sub-scene image collected by the second collection device; and generating a label according to the sub-scene image; and the step of determining at least one target position in the video frame image comprises: determining, according to pre-acquired calibration information of the first collection device and the second collection device, a target position in the panoramic image of the label corresponding to the second collection device.
- The method according to claim 2, wherein the first collection device is an augmented reality (AR) panoramic camera.
- The method according to claim 2, wherein the step of generating a label according to the sub-scene image comprises: adding the sub-scene image and/or target information in the sub-scene image to the content of the label.
- The method according to claim 4, wherein the step of adding the target information in the sub-scene image to the content of the label comprises: identifying the sub-scene image, determining the target information in the sub-scene image according to the recognition result, and adding the target information to the content of the label; or receiving the target information sent by the second collection device, and adding the target information to the content of the label; or receiving the target information sent by a server communicatively connected to the second collection device, and adding the target information to the content of the label.
- The method according to claim 1, wherein the step of displaying, according to the preset display rule, the video frame image after the label is added comprises: in a first area, displaying the video frame image after the label is added; and in a second area, displaying the content of the added label.
- The method according to claim 1, wherein the step of displaying, according to the preset display rule, the video frame image after the label is added comprises: displaying, in the form of a picture-in-picture, the video frame image after the label is added and the content of the added label.
- The method according to claim 6 or 7, wherein displaying the content of the added label comprises: determining a currently displayed label among the added labels; and displaying the content of the currently displayed label.
- The method according to claim 6 or 7, further comprising: after detecting that a user clicks a label in the video frame image, determining the clicked label as a target label; and displaying the content of the target label in the video frame image.
- The method according to claim 1, wherein before the step of determining at least one target position in the video frame image, the method further comprises: receiving a label adding instruction; and generating a label according to the label adding instruction; and the step of determining at least one target position in the video frame image comprises: determining a target position of the added label according to the label adding instruction.
- The method according to claim 1, wherein the step of displaying, according to the preset display rule, the video frame image after the label is added comprises: determining, according to a preset layer classification strategy, a layer corresponding to each label; determining a layer display strategy, and determining, according to the layer display strategy, a currently displayed layer and a display manner of the currently displayed layer; and displaying, in the display manner, the labels corresponding to the currently displayed layer.
- The method according to claim 2, wherein the step of acquiring the sub-scene image collected by the second collection device comprises: detecting whether an abnormal event occurs in the panoramic image; if so, determining a target second collection device corresponding to the abnormal event; and acquiring a sub-scene image collected by the target second collection device; and the step of generating a label according to the sub-scene image comprises: generating, according to the sub-scene image, a label corresponding to the abnormal event.
- The method according to claim 12, wherein the step of detecting whether an abnormal event occurs in the panoramic image comprises: matching the panoramic image with a preset anomaly model, wherein a successful match indicates that an abnormal event occurs in the panoramic image; or determining whether abnormal event alarm information for the panoramic image is received, wherein receipt of the alarm information indicates that an abnormal event occurs in the panoramic image.
- The method according to claim 12, wherein the step of determining the target second collection device corresponding to the abnormal event comprises: determining a position of the abnormal event in the panoramic image; and determining, according to pre-acquired calibration information of the first collection device and each second collection device, the target second collection device corresponding to the position.
- The method according to claim 12, wherein, in the case that an abnormal event is detected in the panoramic image, the method further comprises: determining whether the position of the abnormal event in the panoramic image is located in a preset focus area; and if so, the step of displaying, according to the preset display rule, the video frame image after the label is added comprises: displaying the label in the video frame image in a preset alarm mode.
- The method according to claim 1, further comprising: acquiring a detail image corresponding to the video frame image collected by the first collection device; after determining the at least one target position in the video frame image, the method further comprises: determining, according to a pre-acquired pixel point correspondence between the detail image and the video frame image, a position in the detail image corresponding to the target position, as a to-be-processed position; and adding the label added at the target position to the to-be-processed position corresponding to the target position; and the displaying, according to the preset display rule, the video frame image after the label is added comprises: displaying, according to the preset display rule, the video frame image after the label is added and the detail image after the label is added.
- The method according to claim 16, wherein the displaying, according to the preset display rule, the video frame image after the label is added and the detail image after the label is added comprises: in a first area, displaying the video frame image after the label is added, and in a third area, displaying the detail image after the label is added; or displaying, in the form of a picture-in-picture, the video frame image after the label is added and the detail image after the label is added.
- An image processing device, comprising: a processor and a memory; wherein the memory is configured to store a computer program; and the processor is configured to implement the image processing method according to any one of claims 1-17 when executing the program stored on the memory.
- An image processing system, comprising: a first collection device and an image processing device, wherein the first collection device is configured to collect a video frame image and send the collected video frame image to the image processing device; and the image processing device is configured to: for the video frame image collected by the first collection device, determine at least one target position in the video frame image; add a label at each determined target position, the label being generated according to an input instruction or an image collected by a second collection device; and display, according to a preset display rule, the video frame image after the label is added.
- The system according to claim 19, further comprising at least one second collection device, wherein the second collection device is configured to perform image collection on a sub-scene corresponding to a panoramic image, the panoramic image being the video frame image collected by the first collection device; and the image processing device is further configured to: acquire a sub-scene image collected by the second collection device; generate a label according to the sub-scene image; and determine, according to pre-acquired calibration information of the first collection device and the second collection device, a target position in the panoramic image of the label corresponding to the second collection device.
- The system according to claim 19, wherein the first collection device is an augmented reality (AR) panoramic camera.
- The system according to claim 19, further comprising a third collection device, wherein the third collection device is configured to collect a detail image corresponding to a panoramic image, the panoramic image being the video frame image collected by the first collection device; and the image processing device is further configured to: acquire the detail image collected by the third collection device; determine, according to a pre-acquired pixel point correspondence between the detail image and the video frame image, a position in the detail image corresponding to the target position, as a to-be-processed position; add the label added at the target position to the to-be-processed position corresponding to the target position; and display, according to a preset display rule, the video frame image after the label is added and the detail image after the label is added.
- A computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, and the computer program, when executed by a processor, implements the method steps of any one of claims 1-17.
- Executable program code, wherein the executable program code is configured to be run to perform the method steps of any one of claims 1-17.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810272370.9 | 2018-03-29 | ||
CN201810272370.9A CN109274926B (en) | 2017-07-18 | 2018-03-29 | Image processing method, device and system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019184275A1 true WO2019184275A1 (en) | 2019-10-03 |
Family
ID=68062694
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/106752 WO2019184275A1 (en) | 2018-03-29 | 2018-09-20 | Image processing method, device and system |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2019184275A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102457718A (en) * | 2010-10-14 | 2012-05-16 | 霍尼韦尔国际公司 | Graphical bookmarking of video data with user inputs in video surveillance |
CN103929618A (en) * | 2014-04-18 | 2014-07-16 | 卢旭东 | Operational control method for outdoor advertising board state marking system |
CN104285244A (en) * | 2012-05-23 | 2015-01-14 | 高通股份有限公司 | Image-driven view management for annotations |
US20170364747A1 (en) * | 2016-06-15 | 2017-12-21 | International Business Machines Corporation | AUGEMENTED VIDEO ANALYTICS FOR TESTING INTERNET OF THINGS (IoT) DEVICES |
-
2018
- 2018-09-20 WO PCT/CN2018/106752 patent/WO2019184275A1/en active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102457718A (en) * | 2010-10-14 | 2012-05-16 | 霍尼韦尔国际公司 | Graphical bookmarking of video data with user inputs in video surveillance |
CN104285244A (en) * | 2012-05-23 | 2015-01-14 | 高通股份有限公司 | Image-driven view management for annotations |
CN103929618A (en) * | 2014-04-18 | 2014-07-16 | 卢旭东 | Operational control method for outdoor advertising board state marking system |
US20170364747A1 (en) * | 2016-06-15 | 2017-12-21 | International Business Machines Corporation | AUGEMENTED VIDEO ANALYTICS FOR TESTING INTERNET OF THINGS (IoT) DEVICES |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109274926B (en) | Image processing method, device and system | |
US10043079B2 (en) | Method and apparatus for providing multi-video summary | |
CN104137154B (en) | Systems and methods for managing video data | |
US20110109747A1 (en) | System and method for annotating video with geospatially referenced data | |
CN110536074B (en) | Intelligent inspection system and inspection method | |
US8929596B2 (en) | Surveillance including a modified video data stream | |
CN110557603B (en) | Method and device for monitoring moving target and readable storage medium | |
CN110136091B (en) | Image processing method and related product | |
KR101652856B1 (en) | Apparatus for providing user interface screen based on control event in cctv | |
CN101272483B (en) | System and method for managing moving surveillance cameras | |
EP3062506B1 (en) | Image switching method and apparatus | |
CN112162683A (en) | Image amplification method and device and storage medium | |
JP2019125053A (en) | Information terminal device, information processing system, and display control program | |
KR100653825B1 (en) | Change detecting method and apparatus | |
JP4632362B2 (en) | Information output system, information output method and program | |
WO2019184275A1 (en) | Image processing method, device and system | |
KR101842564B1 (en) | Focus image surveillant method for multi images, Focus image managing server for the same, Focus image surveillant system for the same, Computer program for the same and Recording medium storing computer program for the same | |
CN110737385A (en) | video mouse interaction method, intelligent terminal and storage medium | |
CN113905211B (en) | Video patrol method, device, electronic equipment and storage medium | |
US20210375109A1 (en) | Team monitoring | |
US20230162591A1 (en) | Interactive kiosk with emergency call module | |
US20030112415A1 (en) | Apparatus for projection and capture of a display interface | |
KR20200073669A (en) | Method for managing image information, Apparatus for managing image information and Computer program for the same | |
KR102398280B1 (en) | Apparatus and method for providing video of area of interest | |
CN114677163A (en) | Advertisement interaction method, device, medium and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18911456 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18911456 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18911456 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18911456 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.05.2021) |