Hanwha Techwin to showcase its video surveillance solutions at Intersec 2022

Global security company Hanwha Techwin will attend Intersec 2022 from 16-18 January to showcase its range of surveillance solutions built on cutting-edge artificial intelligence (AI) technologies.

At Intersec, Hanwha Techwin will showcase products particularly suited to surveillance needs in the urban Middle East and North Africa (MENA) region, including traffic detection, accident detection, littering detection and more.

Sungjae Lee, Managing Director of Hanwha Techwin Middle East FZE, said: “The MENA region is a crucial priority for Hanwha Techwin and for our leadership position in AI in the video security industry.

“Hanwha Techwin established its AI lab early on to build competitiveness in AI and develop relevant technologies. Hanwha Techwin will make continual investments in advanced AI-based open platforms to help customers in the MENA region expand and utilise the solutions they need.”

Among the key products the company will exhibit are the New X Core and X Plus series of cameras, which incorporate AI-powered analytics for superior operator efficiency. These analytics deliver real-time event notifications and post-event search capabilities for thorough and efficient monitoring.

The New X series uses AI to detect and classify people, vehicles, faces, license plates and more in real time. The X Plus range builds on the promise of modular design: users with existing X Plus cameras can simply attach AI-powered X Plus camera modules in the optimal position.

The company will also highlight the benefits of AI technology for image quality. Operators can use WiseNRII, an AI-powered noise reduction technology, to tailor image settings to local conditions, helping achieve greater image clarity in noisy, low-light environments. Additionally, Preferred Shutter AI technology automatically adjusts the shutter speed based on classified objects in motion and lighting conditions to reduce motion blur.