PROJECTS

Building Height Estimation from Satellite Images

This project aims to estimate building heights from satellite images, which is crucial for improving communication systems by addressing line-of-sight (LOS) losses caused by tall buildings and geographical terrain features. The project uses monocular RGB images from sources such as OpenStreetMap and Huawei Petal Maps, annotated with building heights, to create a dataset. This dataset is then used to train a multi-task model that combines a building detector with a height-regression component: the detector identifies building contours, while the regression head predicts their heights. The development process involves preparing the images, running evaluations, and training the multi-task learning model to refine accuracy. The final output is a building height map, which is especially useful in areas where LOS communication is critical, such as towns and suburban areas, villages and rural regions, and city centers. This project can help optimize wireless network designs and other applications where knowledge of building heights is essential.
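To make the multi-task setup concrete, the sketch below is an illustration only (not the project's actual code): a shared encoder feeds both a building-mask head and a per-pixel height-regression head, and the two losses are combined with an illustrative weight.

```python
# Minimal sketch (assumption, not the project's actual model) of a multi-task
# network: a shared encoder feeds a building-segmentation head and a
# per-pixel height-regression head.
import torch
import torch.nn as nn

class BuildingHeightNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared convolutional encoder over the RGB satellite image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Head 1: building detection as a per-pixel mask (building vs. background).
        self.mask_head = nn.Conv2d(64, 1, 1)
        # Head 2: per-pixel height regression (e.g. in metres).
        self.height_head = nn.Conv2d(64, 1, 1)

    def forward(self, x):
        feats = self.encoder(x)
        return torch.sigmoid(self.mask_head(feats)), self.height_head(feats)

# Joint loss: segmentation (BCE) + height regression (L1), weighted by a
# hyperparameter lambda_h whose value here is purely illustrative.
def multitask_loss(pred_mask, pred_height, gt_mask, gt_height, lambda_h=1.0):
    seg_loss = nn.functional.binary_cross_entropy(pred_mask, gt_mask)
    # Regress height only on annotated building pixels.
    reg_loss = nn.functional.l1_loss(pred_height * gt_mask, gt_height * gt_mask)
    return seg_loss + lambda_h * reg_loss
```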

Smart Camera Systems for Wide Area Surveillance

This project focuses on the design and development of a smart, real-time embedded system for wide-area surveillance, employing advanced image and video analytics. Such systems have become increasingly important for critical applications such as urban security management, traffic monitoring, and border protection. They typically involve a network of multiple cameras with varying viewing angles, or specialized sensors mounted on an aerial vehicle, to cover extensive areas. By integrating the views from these cameras, we can generate a comprehensive image of the area under surveillance. This integration is crucial for preventing criminal and terrorist acts, as well as for identifying traffic infractions within the observed zone. Click here for more details.
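As a rough illustration of the view-integration step (an assumption about the approach, not the system's actual pipeline), the OpenCV sketch below matches features between two overlapping camera views, estimates a homography with RANSAC, and warps one view into the other's frame to build a simple mosaic.

```python
# Hypothetical sketch of combining two overlapping camera views into a single
# mosaic via feature matching and a homography. The real system may use
# calibrated rigs or aerial sensors; this is illustration only.
import cv2
import numpy as np

def stitch_pair(img_a, img_b):
    # Detect and describe keypoints in both views.
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]

    # Estimate the homography that maps view B into view A's frame.
    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp view B onto a wider canvas and paste view A on top.
    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (w * 2, h))
    canvas[:h, :w] = img_a
    return canvas
```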

Model-free Solution of Inverse Problems in Image/Video Processing using Deep Learning

This project is centered on harnessing deep learning to build a universal framework capable of tackling a variety of inverse problems in computer vision and image processing without relying on specific physical models. Deep learning has already been shown in the literature to outperform traditional variational approaches on challenges such as image denoising, sharpening, and super-resolution. The key novelty of our project is a deep learning architecture that is largely agnostic to the model parameters, setting it apart from existing methods. This architecture will be versatile and easily adaptable to a range of different inverse problems. Click here for more details.
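A minimal sketch of the model-agnostic idea, written under our own assumptions rather than reflecting the project's final architecture, is a single residual restoration network trained on (degraded, clean) pairs drawn from several different degradation operators, so the same weights can serve denoising, deblurring, and pre-upsampled super-resolution without being told which forward model produced the input.

```python
# Illustrative sketch (assumptions only): one residual CNN trained on mixed
# degradations, so a single set of weights addresses several inverse problems.
import torch
import torch.nn as nn

class RestorationNet(nn.Module):
    def __init__(self, channels=64, depth=6):
        super().__init__()
        layers = [nn.Conv2d(3, channels, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(channels, 3, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual prediction: the network estimates the corruption and
        # subtracts it, independent of which degradation produced the input.
        return x - self.body(x)

# One training step over a batch mixing noise, blur, and other degradations.
def train_step(model, optimizer, degraded, clean):
    optimizer.zero_grad()
    loss = nn.functional.l1_loss(model(degraded), clean)
    loss.backward()
    optimizer.step()
    return loss.item()
```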

Multispectral Image Registration

Multispectral image registration is the task of aligning images from different spectral bands, such as thermal and optical. Accurate alignment is essential for applications including autonomous navigation, environmental monitoring, agricultural surveillance, and disaster response. Achieving precise alignment in real time is a significant challenge that traditional methods often fail to meet because of their computational complexity. We plan to address this challenge by developing state-of-the-art deep learning-based solutions for multispectral image registration. By enhancing the speed and quality of cross-spectral matches, we aim to achieve real-time performance that can significantly improve the efficiency and robustness of autonomous navigation systems and other applications relying on multispectral data.
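One possible deep-learning formulation, shown purely as a hypothetical sketch rather than our chosen method, is direct homography regression: the optical and thermal frames are stacked channel-wise and a small CNN regresses the eight corner displacements that define the alignment, from which the full homography can be recovered (e.g. with a DLT solve).

```python
# Hypothetical sketch of deep homography regression between an optical and a
# thermal frame; the architecture and sizes below are illustrative only.
import torch
import torch.nn as nn

class CrossSpectralHomographyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),   # grayscale optical + thermal
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # 8 outputs: x/y displacements of the 4 image corners, which
        # parameterize the aligning homography.
        self.regressor = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 4 * 4, 256), nn.ReLU(), nn.Linear(256, 8)
        )

    def forward(self, optical, thermal):
        x = torch.cat([optical, thermal], dim=1)  # (B, 2, H, W)
        return self.regressor(self.features(x))

# Example: regress corner offsets for a 256x256 optical/thermal pair.
if __name__ == "__main__":
    net = CrossSpectralHomographyNet()
    optical = torch.rand(1, 1, 256, 256)
    thermal = torch.rand(1, 1, 256, 256)
    print(net(optical, thermal).shape)  # torch.Size([1, 8])
```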