Multi-view shape estimation of transparent containers

Published in: Proc. of IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, May 4-8, 2020

Recommended citation: A. Xompero, R. Sanchez-Matilla, A. Modas, P. Frossard, A. Cavallaro. "Multi-view shape estimation of transparent containers." Proc. of IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), 2020.

Abstract The 3D localisation of an object and the estimation of its properties, such as shape and dimensions, are challenging under varying degrees of transparency and lighting conditions. In this paper, we propose a method for jointly localising container-like objects and estimating their dimensions using two wide-baseline, calibrated RGB cameras. Under the assumption of vertical circular symmetry, we estimate the dimensions of an object by sampling, at different heights, a set of sparse circumferences with iterative shape fitting and image re-projection to verify the sampling hypotheses in each camera using semantic segmentation masks. We evaluate the proposed method on a novel dataset of objects with different degrees of transparency, captured under different backgrounds and illumination conditions. Our method, which is based on RGB images only, outperforms a deep-learning based approach that uses depth maps, both in localisation success and in dimension estimation accuracy.
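The sampling-and-verification idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the camera matrices, the per-view support score, and the rule for selecting the radius hypothesis are all simplifying assumptions. The sketch samples a horizontal circumference at a given height, projects it into both calibrated views, and keeps a radius hypothesis only while its projections remain inside the semantic segmentation masks.

```python
import numpy as np

def sample_circle_3d(center, radius, height, n=36):
    # Points on a horizontal circumference (vertical symmetry axis assumed).
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
    x = center[0] + radius * np.cos(angles)
    y = center[1] + radius * np.sin(angles)
    z = np.full(n, height)
    return np.stack([x, y, z], axis=1)  # shape (n, 3)

def project(P, pts3d):
    # Pinhole projection with a 3x4 camera matrix; returns (n, 2) pixel coords.
    homo = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    uv = (P @ homo.T).T
    return uv[:, :2] / uv[:, 2:3]

def mask_support(P, mask, pts3d):
    # Fraction of projected points that land inside the segmentation mask.
    uv = np.round(project(P, pts3d)).astype(int)
    h, w = mask.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    if not inside.any():
        return 0.0
    return float(mask[uv[inside, 1], uv[inside, 0]].mean())

def fit_radius(P1, mask1, P2, mask2, center, height, radii):
    # Keep the largest radius hypothesis still supported in BOTH views
    # (illustrative scoring rule, not the paper's exact criterion).
    best_r, best_score = None, 0.0
    for r in sorted(radii):
        pts = sample_circle_3d(center, r, height)
        score = min(mask_support(P1, mask1, pts), mask_support(P2, mask2, pts))
        if score >= best_score and score > 0.0:
            best_r, best_score = r, score
    return best_r
```

Repeating `fit_radius` over a range of heights yields the sparse set of circumferences whose radii describe the container's shape profile.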

Sample image

Links: Website | Dataset | Paper | [Code coming soon]

Video