Pixel2Volume: Camera-Only Volumetry for IV Bags and Lab Containers

MSc assignment

Not just shapes, but volumes: towards measuring volume using only a camera

Today, estimating how much liquid remains in a bag or container still depends on contact sensors, manual reading, or custom hardware. This project poses a simple yet ambitious question:

Can a single camera – without any markers or physical modifications – serve as a reliable “volume meter” for everyday containers?

The idea is to point a monocular camera at items such as IV bags, laboratory bottles, or simple packaging. The system fuses depth information over time (either from an actual depth stream or from depth estimated from RGB images by modern monocular depth networks) to reconstruct a 3D model of the container and its fill level. On top of that, it estimates the absolute volume and its rate of change, generates volume–time curves, and provides short-term predictions (for example, “the IV bag will be empty in 7 minutes”).
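
To make the prediction part concrete, here is a minimal sketch of how a volume–time curve could be turned into a time-to-empty estimate, assuming volume estimates already arrive as (timestamp, volume) samples; the function name, the units, and the linear-drain model are illustrative assumptions, not a prescribed design.

    import numpy as np

    def predict_time_to_empty(timestamps_s, volumes_ml, empty_threshold_ml=10.0):
        """Fit a line to recent (time, volume) samples and extrapolate when the
        estimated volume drops below a near-empty threshold.
        Returns the remaining time in seconds, or None if the level is not falling."""
        t = np.asarray(timestamps_s, dtype=float)
        v = np.asarray(volumes_ml, dtype=float)
        slope, intercept = np.polyfit(t, v, deg=1)   # drain rate (ml/s) and offset (ml)
        if slope >= 0.0:                             # steady or rising: no meaningful ETA
            return None
        t_empty = (empty_threshold_ml - intercept) / slope
        return max(0.0, t_empty - t[-1])

    # Example: a 500 ml bag draining at roughly 1 ml/s, with noisy volume estimates
    t = np.arange(0.0, 60.0, 5.0)
    v = 500.0 - 1.0 * t + np.random.normal(0.0, 5.0, size=t.size)
    eta_s = predict_time_to_empty(t, v)
    if eta_s is not None:
        print(f"predicted time to empty: {eta_s / 60.0:.1f} min")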

The scientific core is not just to build a nice demo, but to answer questions such as:

  • Under real-world conditions (changing illumination, varying viewpoints, deforming bags), how accurate can volume measurement be using only a camera?

  • How can segmentation, monocular depth, and simple geometric priors be combined so that the method stays lightweight in calibration yet robust in practice? (A sketch of one possible combination follows this list.)

  • How do uncertainty and noise along the depth dimension propagate into volume errors, and how can they be compensated for or mitigated? (See the second sketch below.)
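
As one concrete illustration of such a combination, the sketch below fuses a binary segmentation mask, a metric depth map, and a simple geometric prior – a flat back plane at known depth (for example, the wall or pole the bag hangs against) – into a volume estimate under a pinhole camera model. The names, the back-plane prior, and the assumption of metric depth are illustrative choices, not the project's prescribed method.

    import numpy as np

    def volume_from_depth(depth_m, mask, fx, fy, z_back_m):
        """Integrate an observed front surface into a volume estimate.
        depth_m  : HxW metric depth map (front surface of the scene), in metres
        mask     : HxW boolean segmentation mask of the container
        fx, fy   : pinhole focal lengths in pixels
        z_back_m : assumed depth of a flat back plane behind the container (prior)
        Returns the estimated volume in litres."""
        z = depth_m[mask]
        pixel_area_m2 = (z / fx) * (z / fy)              # metric footprint of one pixel at depth z
        thickness_m = np.clip(z_back_m - z, 0.0, None)   # column between front surface and back plane
        return np.sum(pixel_area_m2 * thickness_m) * 1000.0   # m^3 -> litres

In practice the mask would come from a segmentation network, the depth map from a depth camera or a monocular depth model rescaled to metric units, and the back plane from a one-off calibration step.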

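For the error-propagation question, one simple baseline is Monte-Carlo propagation: perturb the depth map with the sensor's (or depth network's) per-pixel noise model and observe the spread of the resulting volume estimates. The Gaussian noise model, the 5 mm default, and the reuse of volume_from_depth above are assumptions for illustration.

    def volume_uncertainty(depth_m, mask, fx, fy, z_back_m,
                           sigma_z_m=0.005, n_samples=200, rng=None):
        """Monte-Carlo estimate of how per-pixel depth noise (std sigma_z_m)
        propagates into the volume; returns (mean_l, std_l) in litres."""
        rng = np.random.default_rng() if rng is None else rng
        samples = [
            volume_from_depth(depth_m + rng.normal(0.0, sigma_z_m, depth_m.shape),
                              mask, fx, fy, z_back_m)
            for _ in range(n_samples)
        ]
        return float(np.mean(samples)), float(np.std(samples))
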
A successful project should deliver:

  • A working prototype for non-contact volume measurement of IV bags, laboratory containers, or simple packaging,

  • Quantitative evaluation against ground-truth volume measurements, and

  • Design guidelines showing when camera-only volume sensing is sufficient to drive alarms, automatic stopping, or closed-loop control.

In short: moving from a single camera to actionable volume intelligence – not only seeing what is there, but also how much is there and for how much longer.