Object Classification (ObjectC)
Technical Article | TA-20201002-TP-20 VDG Sense | Video Content Analyses | ObjectC |
Introduction
ObjectC is the server-based Object Classification algorithm that classifies objects in streaming video using Deep Learning techniques. The following object types can be detected and classified:
Person,
Car,
Bicycle,
Motorbike,
Bus,
Train,
Truck,
Boat
Using detection zones it is possible to trigger alarms if a configurable number of these objects is in a zone for a certain amount of time. Practical examples include:
Crowd Management applications
Example: Trigger an alarm if more than 10 people are in a zone for more than 10 seconds.
Vehicle type detection
Example: Trigger an alarm if a single bicycle is detected in a zone.
Stop and Go applications
Example: Trigger an alarm if a car is in a zone for more than 30 seconds.
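Each of the three example applications above reduces to the same kind of rule: an object type, a count threshold, and a dwell time. The sketch below illustrates this structure in Python; the names (`ZoneRule`, `triggers`) and the exact threshold semantics are assumptions for illustration, not the VDG Sense configuration format.

```python
# Hypothetical sketch: the three example alarms expressed as
# (object type, minimum count, dwell time) rules. Not VDG Sense code.
from dataclasses import dataclass

@dataclass
class ZoneRule:
    object_type: str
    min_count: int       # minimum number of objects in the zone
    detect_delay: float  # seconds they must remain before an alarm fires

rules = [
    ZoneRule("Person", 10, 10.0),   # crowd management example
    ZoneRule("Bicycle", 1, 0.0),    # vehicle type detection example
    ZoneRule("Car", 1, 30.0),       # stop-and-go example
]

def triggers(rule: ZoneRule, count_in_zone: int, seconds_in_zone: float) -> bool:
    """True when enough objects of the rule's type have dwelt long enough."""
    return count_in_zone >= rule.min_count and seconds_in_zone >= rule.detect_delay

print(triggers(rules[0], 12, 11.0))  # True: 12 people present for 11 s
print(triggers(rules[2], 1, 15.0))   # False: car only 15 s in the zone
```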
The algorithm works best in light-controlled environments where the objects to be detected are clearly visible, in the vicinity of the camera (20-30 m), and the camera angle is around 45 degrees.
ObjectC generates events of type ‘ObjectC’, which are visible in the event list, timeline, and event search layout, and can be used in the Macro engine for automatic procedures.
Licensing
ObjectC is a licensed feature per camera channel and can be enabled for any camera in the VDG Sense configuration.
Hardware Requirements
ObjectC requires an NVIDIA CUDA-capable video card with at least 4 GB of memory; without one, ObjectC cannot be enabled on the server. VDG Sense informs the user if no CUDA-capable card is present: when ObjectC is enabled on a server whose video card does not meet the minimum requirements, the following dialog appears.
Learn more about NVIDIA CUDA cards at developer.nvidia.com.
Setup
Enabling and configuring the ObjectC algorithm is done in the Setup -> Devices tab by checking the ‘Enabled’ checkbox. If the license allows it and the server meets the minimum hardware requirements for ObjectC, the checkbox remains checked.
Settings
After enabling the ObjectC algorithm several other settings become available. See below for an overview where two zones are configured.
Zone placement:
On the upper left side two icons are available to configure the zones. The top icon is used to add up to eight zones. The lower icon is to remove the selected zone. See below.
The zones themselves can be moved by placing the mouse cursor in the middle of a zone, holding the left mouse button, and dragging the zone. The size and shape of a zone can be changed by holding the left mouse button while the cursor is on one of the small circles at the corners of the zone. Extra corners can be added or removed by double-clicking the left mouse button anywhere on the edges of the zone. See below.
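The corner circles mean each zone is a polygon, and "an object is in a zone" amounts to a point-in-polygon test on the object's trigger point. The sketch below shows the standard ray-casting algorithm for that test; it is an illustration of the general technique, not VDG Sense's actual implementation.

```python
# Hedged sketch: classic ray-casting point-in-polygon test, illustrating how
# an object's trigger point could be checked against a zone's corner points.
def point_in_zone(x: float, y: float, zone: list) -> bool:
    """zone: list of (x, y) corner points; True if (x, y) lies inside."""
    inside = False
    n = len(zone)
    for i in range(n):
        x1, y1 = zone[i]
        x2, y2 = zone[(i + 1) % n]
        # Count how many edges a horizontal ray to the right of (x, y) crosses;
        # an odd number of crossings means the point is inside the polygon.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_zone(5, 5, square))   # True: center of the square
print(point_in_zone(15, 5, square))  # False: outside to the right
```

Adding or removing a corner, as described above, simply lengthens or shortens the list of corner points; the same test keeps working for any polygon shape.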
ObjectC Settings
Algorithm Framerate factor
There are limitations on how many cameras/frames can be processed on a single system. The ‘Algorithm Framerate factor’ reduces the number of processed frames by dividing the camera framerate by this setting. A factor of 1 means all frames are processed (25 fps / 1 = 25 fps for processing). A factor of 5 means one out of every 5 frames is processed (25 fps / 5 = 5 fps for processing). The actual number of processed frames depends on the actual framerate of the videostream.
If performance issues start to occur, this value needs to be increased in order to reduce the load. See below for an example of the Windows Task Manager showing the CUDA load of the GPU card. A maximum load of 70% is recommended.
Show Detected Objects
By default, the object boxes which identify the objects in the videostream are disabled. To show the object boxes and see the actual classification, the ‘Show detected objects’ setting needs to be enabled. When it is enabled and the framerate factor is configured to a value higher than 1, the actual processing framerate, which is lower than the camera fps, is visible in the videostream.
Shape Settings
Each zone or shape can be configured with parameters defining when an event is generated. The shape settings are shown on the right side of the screen after selecting a zone in the videostream. The combination of these conditions defines when an ObjectC event will be generated. See below.
Event Text
The custom text for this zone which is shown in the event value.
Example: ‘Zone 1: Persons detected’
Object Type
Define the object type to be taken into account for event generation: Person, Car, Bicycle, Motorbike, Bus, Train, Truck, or Boat.
Trigger Point
Define the trigger point of the object: either ‘center of gravity’, which means the center of the object needs to be in the zone, or ‘bottom center’, which means the bottom of the object needs to be in the zone to be taken into account for event generation.
Minimum Object Count
The number of objects (based on trigger point) which need to be in the zone to be taken into account for event generation.
Detect Delay
The amount of time the objects (based on trigger point and object count) need to be in the zone to be taken into account for event generation.
Glue Events Within
Define multiple events within this time frame as a single event.
Max Event Duration
The maximum amount of time events can be glued before a new event must be triggered.
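The interplay of ‘Glue Events Within’ and ‘Max Event Duration’ can be sketched as a small decision function: a new detection extends the running event while it falls inside the glue window and the glued event has not exceeded its maximum duration. The function and field names below mirror the settings above but are assumptions for illustration, not the real VDG Sense logic.

```python
# Hedged sketch of event gluing: decide whether a detection at time `now`
# extends the currently open (glued) event or starts a new ObjectC event.
def glue_or_new_event(open_event, now, glue_within, max_duration):
    """open_event: (start, last) timestamps of the open event, or None."""
    if open_event is None:
        return "new"                       # nothing open: trigger a new event
    start, last = open_event
    if now - last <= glue_within and now - start <= max_duration:
        return "glue"                      # merge into the running event
    return "new"                           # gap too long or event too old

print(glue_or_new_event(None, 0.0, 5.0, 60.0))          # new
print(glue_or_new_event((0.0, 2.0), 4.0, 5.0, 60.0))    # glue
print(glue_or_new_event((0.0, 58.0), 61.0, 5.0, 60.0))  # new: exceeds max duration
```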